Test Report: Hyper-V_Windows 18756

159c0885aec790b0bc18754712c4d2a4038767fb:2024-04-29:34251

Tests failed (20/190)

TestAddons/parallel/Registry (72.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 19.6154ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tvvqn" [cd91972e-7309-42de-972b-4e836b093c94] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0363766s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-24hnw" [556ba331-0f4c-4c0b-a8cb-9ceaf9b76463] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0228069s
addons_test.go:340: (dbg) Run:  kubectl --context addons-839400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-839400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-839400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.8533884s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 ip: (3.1368738s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0429 10:46:43.704666    3512 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-839400 ip"
2024/04/29 10:46:46 [DEBUG] GET http://172.26.182.147:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable registry --alsologtostderr -v=1: (16.0548349s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-839400 -n addons-839400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-839400 -n addons-839400: (13.1936221s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 logs -n 25: (9.2770046s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | -p download-only-805300                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| delete  | -p download-only-805300                                                                     | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| start   | -o=json --download-only                                                                     | download-only-614800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | -p download-only-614800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| delete  | -p download-only-614800                                                                     | download-only-614800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| delete  | -p download-only-805300                                                                     | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| delete  | -p download-only-614800                                                                     | download-only-614800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-922900 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | binary-mirror-922900                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:56167                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-922900                                                                     | binary-mirror-922900 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | addons-839400                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | addons-839400                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-839400 --wait=true                                                                | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-839400 addons                                                                        | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC | 29 Apr 24 10:46 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-839400 ssh cat                                                                       | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC | 29 Apr 24 10:46 UTC |
	|         | /opt/local-path-provisioner/pvc-728dcdb0-c080-4102-9c29-17ac82cdab32_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-839400 ip                                                                            | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC | 29 Apr 24 10:46 UTC |
	| addons  | addons-839400 addons disable                                                                | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC | 29 Apr 24 10:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-839400 addons disable                                                                | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:46 UTC | 29 Apr 24 10:47 UTC |
	|         | addons-839400                                                                               |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-839400        | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:47 UTC |                     |
	|         | -p addons-839400                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 10:39:55
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 10:39:55.515103   10056 out.go:291] Setting OutFile to fd 916 ...
	I0429 10:39:55.515708   10056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 10:39:55.515708   10056 out.go:304] Setting ErrFile to fd 920...
	I0429 10:39:55.515708   10056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 10:39:55.540280   10056 out.go:298] Setting JSON to false
	I0429 10:39:55.543195   10056 start.go:129] hostinfo: {"hostname":"minikube6","uptime":28668,"bootTime":1714358527,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 10:39:55.543195   10056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 10:39:55.561488   10056 out.go:177] * [addons-839400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 10:39:55.568150   10056 notify.go:220] Checking for updates...
	I0429 10:39:55.571457   10056 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 10:39:55.576472   10056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 10:39:55.580828   10056 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 10:39:55.583446   10056 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 10:39:55.585927   10056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 10:39:55.588408   10056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 10:40:00.956760   10056 out.go:177] * Using the hyperv driver based on user configuration
	I0429 10:40:00.959986   10056 start.go:297] selected driver: hyperv
	I0429 10:40:00.959986   10056 start.go:901] validating driver "hyperv" against <nil>
	I0429 10:40:00.959986   10056 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 10:40:01.008435   10056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 10:40:01.008435   10056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 10:40:01.008435   10056 cni.go:84] Creating CNI manager for ""
	I0429 10:40:01.008435   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 10:40:01.008435   10056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 10:40:01.008435   10056 start.go:340] cluster config:
	{Name:addons-839400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-839400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 10:40:01.008435   10056 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 10:40:01.016008   10056 out.go:177] * Starting "addons-839400" primary control-plane node in "addons-839400" cluster
	I0429 10:40:01.019177   10056 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 10:40:01.019177   10056 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 10:40:01.019177   10056 cache.go:56] Caching tarball of preloaded images
	I0429 10:40:01.019707   10056 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 10:40:01.019824   10056 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 10:40:01.019824   10056 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\config.json ...
	I0429 10:40:01.020547   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\config.json: {Name:mk7084bb42f8d1de539bc9d65893972023af0c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:40:01.021252   10056 start.go:360] acquireMachinesLock for addons-839400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 10:40:01.021983   10056 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-839400"
	I0429 10:40:01.021983   10056 start.go:93] Provisioning new machine with config: &{Name:addons-839400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:addons-839400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 10:40:01.021983   10056 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 10:40:01.025279   10056 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 10:40:01.025930   10056 start.go:159] libmachine.API.Create for "addons-839400" (driver="hyperv")
	I0429 10:40:01.025930   10056 client.go:168] LocalClient.Create starting
	I0429 10:40:01.027054   10056 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 10:40:01.218798   10056 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 10:40:01.586501   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 10:40:03.981396   10056 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 10:40:03.981396   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:03.981396   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 10:40:05.716663   10056 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 10:40:05.716663   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:05.717031   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 10:40:07.224739   10056 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 10:40:07.224817   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:07.224817   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 10:40:11.024439   10056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 10:40:11.024439   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:11.028232   10056 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 10:40:11.556792   10056 main.go:141] libmachine: Creating SSH key...
	I0429 10:40:12.093005   10056 main.go:141] libmachine: Creating VM...
	I0429 10:40:12.093348   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 10:40:14.896564   10056 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 10:40:14.897047   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:14.897226   10056 main.go:141] libmachine: Using switch "Default Switch"
	I0429 10:40:14.897316   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 10:40:16.659108   10056 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 10:40:16.659108   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:16.659108   10056 main.go:141] libmachine: Creating VHD
	I0429 10:40:16.659108   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 10:40:20.277093   10056 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E153E36F-231B-4AB3-94D0-150D8F271A7C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 10:40:20.277093   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:20.277093   10056 main.go:141] libmachine: Writing magic tar header
	I0429 10:40:20.277093   10056 main.go:141] libmachine: Writing SSH key tar header
	I0429 10:40:20.287683   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 10:40:23.425821   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:23.426874   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:23.426928   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\disk.vhd' -SizeBytes 20000MB
	I0429 10:40:25.868412   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:25.868412   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:25.868492   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-839400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0429 10:40:29.487335   10056 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-839400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 10:40:29.488134   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:29.488134   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-839400 -DynamicMemoryEnabled $false
	I0429 10:40:31.610644   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:31.610811   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:31.610811   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-839400 -Count 2
	I0429 10:40:33.725817   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:33.725817   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:33.726314   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-839400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\boot2docker.iso'
	I0429 10:40:36.241469   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:36.241469   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:36.241699   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-839400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\disk.vhd'
	I0429 10:40:38.901708   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:38.902250   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:38.902250   10056 main.go:141] libmachine: Starting VM...
	I0429 10:40:38.902320   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-839400
	I0429 10:40:42.049373   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:42.049373   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:42.049373   10056 main.go:141] libmachine: Waiting for host to start...
	I0429 10:40:42.049373   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:40:44.260734   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:40:44.260734   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:44.261409   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:40:46.724967   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:46.725130   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:47.737915   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:40:49.877788   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:40:49.878407   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:49.878561   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:40:52.365715   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:52.365715   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:53.378319   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:40:55.463958   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:40:55.464712   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:55.464712   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:40:57.888608   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:40:57.888608   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:40:58.893898   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:01.018314   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:01.019291   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:01.019389   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:03.439825   10056 main.go:141] libmachine: [stdout =====>] : 
	I0429 10:41:03.440175   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:04.445917   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:06.599043   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:06.599043   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:06.599560   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:09.120765   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:09.120765   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:09.120765   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:11.185220   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:11.185220   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:11.185865   10056 machine.go:94] provisionDockerMachine start ...
	I0429 10:41:11.186058   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:13.321501   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:13.321501   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:13.321501   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:15.759826   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:15.760466   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:15.766353   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:15.776913   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:15.776913   10056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 10:41:15.903282   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 10:41:15.903282   10056 buildroot.go:166] provisioning hostname "addons-839400"
	I0429 10:41:15.903282   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:17.982439   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:17.982439   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:17.983239   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:20.434992   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:20.435532   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:20.441336   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:20.441903   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:20.441903   10056 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-839400 && echo "addons-839400" | sudo tee /etc/hostname
	I0429 10:41:20.595286   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-839400
	
	I0429 10:41:20.595833   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:22.580673   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:22.580673   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:22.581182   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:25.019444   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:25.019444   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:25.025573   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:25.026286   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:25.026286   10056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-839400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-839400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-839400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 10:41:25.166849   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 10:41:25.167029   10056 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 10:41:25.167087   10056 buildroot.go:174] setting up certificates
	I0429 10:41:25.167132   10056 provision.go:84] configureAuth start
	I0429 10:41:25.167189   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:27.226958   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:27.226958   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:27.227210   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:29.646991   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:29.647201   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:29.647201   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:31.698022   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:31.698022   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:31.698309   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:34.205259   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:34.205259   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:34.205366   10056 provision.go:143] copyHostCerts
	I0429 10:41:34.205366   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 10:41:34.207152   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 10:41:34.208656   10056 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 10:41:34.209809   10056 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-839400 san=[127.0.0.1 172.26.182.147 addons-839400 localhost minikube]
	I0429 10:41:34.390182   10056 provision.go:177] copyRemoteCerts
	I0429 10:41:34.404263   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 10:41:34.405258   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:36.448654   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:36.449178   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:36.449496   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:38.953326   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:38.953326   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:38.954044   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:41:39.057521   10056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.652225s)
	I0429 10:41:39.057521   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 10:41:39.108864   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 10:41:39.159445   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 10:41:39.212210   10056 provision.go:87] duration metric: took 14.0449647s to configureAuth
	I0429 10:41:39.212210   10056 buildroot.go:189] setting minikube options for container-runtime
	I0429 10:41:39.212641   10056 config.go:182] Loaded profile config "addons-839400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 10:41:39.212641   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:41.292580   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:41.293007   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:41.293007   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:43.811035   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:43.811035   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:43.817126   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:43.818017   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:43.818017   10056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 10:41:43.947145   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 10:41:43.947145   10056 buildroot.go:70] root file system type: tmpfs
	I0429 10:41:43.947145   10056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 10:41:43.947739   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:45.994997   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:45.995717   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:45.995777   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:48.483198   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:48.483198   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:48.489351   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:48.490108   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:48.490108   10056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 10:41:48.658187   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 10:41:48.658883   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:50.707795   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:50.708091   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:50.708091   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:53.219280   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:53.219976   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:53.225503   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:41:53.226303   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:41:53.226303   10056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 10:41:55.371578   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
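The diff-or-install compound above is an idempotency guard: the service is only reinstalled, re-enabled, and restarted when the freshly rendered unit actually differs from the installed one. Here diff exits non-zero simply because /lib/systemd/system/docker.service does not exist yet (first provisioning), so the fallback branch installs the new unit. The same guard in isolation (sketch; the variable names are illustrative):

    SRC=/lib/systemd/system/docker.service.new
    DST=/lib/systemd/system/docker.service
    # replace and restart only when the rendered unit differs from the installed one
    sudo diff -u "$DST" "$SRC" || {
      sudo mv "$SRC" "$DST"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }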
	I0429 10:41:55.371578   10056 machine.go:97] duration metric: took 44.1853558s to provisionDockerMachine
	I0429 10:41:55.371578   10056 client.go:171] duration metric: took 1m54.3447302s to LocalClient.Create
	I0429 10:41:55.371578   10056 start.go:167] duration metric: took 1m54.3447302s to libmachine.API.Create "addons-839400"
	I0429 10:41:55.372204   10056 start.go:293] postStartSetup for "addons-839400" (driver="hyperv")
	I0429 10:41:55.372204   10056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 10:41:55.386650   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 10:41:55.386650   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:41:57.485213   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:41:57.485924   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:57.485990   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:41:59.980758   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:41:59.980949   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:41:59.981664   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:42:00.089884   10056 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7031965s)
	I0429 10:42:00.104957   10056 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 10:42:00.113307   10056 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 10:42:00.113432   10056 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 10:42:00.114055   10056 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 10:42:00.114222   10056 start.go:296] duration metric: took 4.7419794s for postStartSetup
	I0429 10:42:00.117726   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:02.130809   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:02.131844   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:02.131875   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:04.619466   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:04.620306   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:04.620529   10056 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\config.json ...
	I0429 10:42:04.624703   10056 start.go:128] duration metric: took 2m3.6017273s to createHost
	I0429 10:42:04.624923   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:06.669801   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:06.669801   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:06.670207   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:09.179397   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:09.179397   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:09.185383   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:42:09.186182   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:42:09.186182   10056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 10:42:09.313267   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714387329.312036824
	
	I0429 10:42:09.313267   10056 fix.go:216] guest clock: 1714387329.312036824
	I0429 10:42:09.313803   10056 fix.go:229] Guest: 2024-04-29 10:42:09.312036824 +0000 UTC Remote: 2024-04-29 10:42:04.6248081 +0000 UTC m=+129.299092701 (delta=4.687228724s)
	I0429 10:42:09.313964   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:11.388934   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:11.389374   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:11.389374   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:13.826926   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:13.826926   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:13.834360   10056 main.go:141] libmachine: Using SSH client type: native
	I0429 10:42:13.834495   10056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.182.147 22 <nil> <nil>}
	I0429 10:42:13.834495   10056 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714387329
	I0429 10:42:13.981380   10056 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 10:42:09 UTC 2024
	
	I0429 10:42:13.981380   10056 fix.go:236] clock set: Mon Apr 29 10:42:09 UTC 2024
	 (err=<nil>)
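The exchange above is minikube's guest-clock fixup: it reads the VM clock with "date +%s.%N", compares it against the host-side timestamp (a delta of about 4.7 s here), and resets the guest clock with "sudo date -s @<epoch>". A shell-level sketch of the same check (the 5-second tolerance and the ssh invocation are illustrative assumptions, not minikube's exact logic):

    guest=$(ssh docker@172.26.182.147 date +%s)    # guest clock, seconds since the epoch
    host=$(date +%s)                               # host clock
    delta=$(( host > guest ? host - guest : guest - host ))
    if [ "$delta" -gt 5 ]; then
      ssh docker@172.26.182.147 sudo date -s "@$host"   # snap the guest clock into line
    fi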
	I0429 10:42:13.981499   10056 start.go:83] releasing machines lock for "addons-839400", held for 2m12.958448s
	I0429 10:42:13.981577   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:16.029292   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:16.029292   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:16.029495   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:18.480680   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:18.480753   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:18.485455   10056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 10:42:18.485455   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:18.497578   10056 ssh_runner.go:195] Run: cat /version.json
	I0429 10:42:18.497578   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:42:20.628029   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:20.628174   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:20.628174   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:20.628174   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:42:20.628174   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:20.628174   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:42:23.204140   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:23.205323   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:23.205523   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:42:23.231052   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:42:23.231111   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:42:23.231948   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:42:23.389702   10056 ssh_runner.go:235] Completed: cat /version.json: (4.8920838s)
	I0429 10:42:23.389702   10056 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9042072s)
	I0429 10:42:23.404997   10056 ssh_runner.go:195] Run: systemctl --version
	I0429 10:42:23.429634   10056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 10:42:23.437581   10056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 10:42:23.451009   10056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 10:42:23.483346   10056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 10:42:23.483346   10056 start.go:494] detecting cgroup driver to use...
	I0429 10:42:23.483977   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 10:42:23.533674   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 10:42:23.574578   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 10:42:23.596287   10056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 10:42:23.610892   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 10:42:23.653909   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 10:42:23.691637   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 10:42:23.727011   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 10:42:23.761611   10056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 10:42:23.798670   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 10:42:23.832291   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 10:42:23.870175   10056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
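The sed pipeline above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the "cgroupfs" driver chosen for this cluster, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is switched on. A quick spot-check of the result (illustrative; the real config.toml contains many more sections):

    grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
    # expected, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   SystemdCgroup = false
    #   enable_unprivileged_ports = true
    #   conf_dir = "/etc/cni/net.d"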
	I0429 10:42:23.903185   10056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 10:42:23.936888   10056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 10:42:23.970843   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:24.193691   10056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 10:42:24.225526   10056 start.go:494] detecting cgroup driver to use...
	I0429 10:42:24.241145   10056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 10:42:24.287729   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 10:42:24.325558   10056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 10:42:24.377407   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 10:42:24.421887   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 10:42:24.464602   10056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 10:42:24.531664   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 10:42:24.558395   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 10:42:24.611565   10056 ssh_runner.go:195] Run: which cri-dockerd
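The printf|tee just above rewrites /etc/crictl.yaml for the second time: crictl resolves its CRI endpoint from this file, which was first pointed at containerd and is now, with Docker via cri-dockerd selected as the runtime, pointed at the cri-dockerd socket. The end state is a one-line file:

    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
    $ sudo crictl version   # now answers via cri-dockerd, as shown further down in the log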
	I0429 10:42:24.632031   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 10:42:24.651191   10056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 10:42:24.699945   10056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 10:42:24.921882   10056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 10:42:25.117954   10056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 10:42:25.118238   10056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 10:42:25.169480   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:25.383941   10056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 10:42:27.924132   10056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5401703s)
	I0429 10:42:27.938678   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 10:42:27.975601   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 10:42:28.012984   10056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 10:42:28.228021   10056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 10:42:28.441587   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:28.650231   10056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 10:42:28.697432   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 10:42:28.735868   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:28.949369   10056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 10:42:29.055507   10056 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 10:42:29.069638   10056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 10:42:29.080562   10056 start.go:562] Will wait 60s for crictl version
	I0429 10:42:29.094133   10056 ssh_runner.go:195] Run: which crictl
	I0429 10:42:29.111280   10056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 10:42:29.174510   10056 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 10:42:29.185479   10056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 10:42:29.229568   10056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 10:42:29.263106   10056 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 10:42:29.263347   10056 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 10:42:29.267317   10056 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 10:42:29.267317   10056 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 10:42:29.267317   10056 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 10:42:29.267893   10056 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 10:42:29.271404   10056 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 10:42:29.271503   10056 ip.go:210] interface addr: 172.26.176.1/20
	I0429 10:42:29.283419   10056 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 10:42:29.290813   10056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
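The grep/echo compound above is an idempotent /etc/hosts update: it filters out any existing host.minikube.internal entry, appends the current mapping, writes to a temp file, and copies the result back with sudo (a sudo'd cp is used because a plain output redirect would run with the unprivileged shell's rights). Generalized (sketch; variable names are illustrative):

    NAME=host.minikube.internal
    IP=172.26.176.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$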
	I0429 10:42:29.311555   10056 kubeadm.go:877] updating cluster {Name:addons-839400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-839400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.182.147 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 10:42:29.314232   10056 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 10:42:29.325996   10056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 10:42:29.349594   10056 docker.go:685] Got preloaded images: 
	I0429 10:42:29.349594   10056 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 10:42:29.364772   10056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 10:42:29.396323   10056 ssh_runner.go:195] Run: which lz4
	I0429 10:42:29.419439   10056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 10:42:29.426772   10056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 10:42:29.427011   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 10:42:31.386274   10056 docker.go:649] duration metric: took 1.9805831s to copy over tarball
	I0429 10:42:31.400289   10056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 10:42:36.664754   10056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.2644221s)
	I0429 10:42:36.664754   10056 ssh_runner.go:146] rm: /preloaded.tar.lz4
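Because the earlier check found that the v1.30.0 images weren't preloaded, minikube copies a ~360 MB lz4 tarball of pre-pulled image layers into the VM and unpacks it straight over /var, preserving extended attributes so file capabilities survive, then deletes the archive. The unpack step in isolation:

    # extract the preloaded image layers over /var, keeping security.capability xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4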
	I0429 10:42:36.733730   10056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 10:42:36.754884   10056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 10:42:36.801157   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:37.014033   10056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 10:42:42.633169   10056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6190911s)
	I0429 10:42:42.647136   10056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 10:42:42.673576   10056 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 10:42:42.673656   10056 cache_images.go:84] Images are preloaded, skipping loading
	I0429 10:42:42.673718   10056 kubeadm.go:928] updating node { 172.26.182.147 8443 v1.30.0 docker true true} ...
	I0429 10:42:42.674206   10056 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-839400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.182.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-839400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 10:42:42.684976   10056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 10:42:42.721115   10056 cni.go:84] Creating CNI manager for ""
	I0429 10:42:42.721182   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 10:42:42.721182   10056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 10:42:42.721253   10056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.182.147 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-839400 NodeName:addons-839400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.182.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.182.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 10:42:42.721513   10056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.182.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-839400"
	  kubeletExtraArgs:
	    node-ip: 172.26.182.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.182.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
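The rendered kubeadm config above is a single file containing four "---"-separated documents: an InitConfiguration (node-local bootstrap: API endpoint, CRI socket, taints), a ClusterConfiguration (control-plane layout, cert SANs, etcd, networking), a KubeletConfiguration, and a KubeProxyConfiguration. Once it lands on the node (written as kubeadm.yaml.new below and promoted to kubeadm.yaml before init), a quick structural check:

    $ grep '^kind:' /var/tmp/minikube/kubeadm.yaml
    kind: InitConfiguration
    kind: ClusterConfiguration
    kind: KubeletConfiguration
    kind: KubeProxyConfiguration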
	I0429 10:42:42.734375   10056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 10:42:42.752361   10056 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 10:42:42.764402   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 10:42:42.781385   10056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 10:42:42.813400   10056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 10:42:42.846530   10056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 10:42:42.891794   10056 ssh_runner.go:195] Run: grep 172.26.182.147	control-plane.minikube.internal$ /etc/hosts
	I0429 10:42:42.898635   10056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.182.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 10:42:42.934283   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:42:43.152146   10056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 10:42:43.183444   10056 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400 for IP: 172.26.182.147
	I0429 10:42:43.183496   10056 certs.go:194] generating shared ca certs ...
	I0429 10:42:43.183496   10056 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.183946   10056 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 10:42:43.299537   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0429 10:42:43.299537   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.303539   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0429 10:42:43.304549   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.305542   10056 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 10:42:43.597628   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0429 10:42:43.597628   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.599676   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0429 10:42:43.599676   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.601056   10056 certs.go:256] generating profile certs ...
	I0429 10:42:43.601372   10056 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.key
	I0429 10:42:43.601372   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt with IP's: []
	I0429 10:42:43.788351   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt ...
	I0429 10:42:43.788351   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: {Name:mkc37b558fb94b1a874d8d62d7e1b40b1d652a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.789942   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.key ...
	I0429 10:42:43.789942   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.key: {Name:mkbe9e0f6e8dccaae21f95e40ee739c58fe110fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:43.791782   10056 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key.21ca8031
	I0429 10:42:43.791969   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt.21ca8031 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.182.147]
	I0429 10:42:44.075938   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt.21ca8031 ...
	I0429 10:42:44.075938   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt.21ca8031: {Name:mkdd5731b0f6b6513778f851857baca772e04057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:44.078021   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key.21ca8031 ...
	I0429 10:42:44.078021   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key.21ca8031: {Name:mk19d3d4ede05ee60be1ff062f0c6a5189dd798d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:44.078935   10056 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt.21ca8031 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt
	I0429 10:42:44.090968   10056 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key.21ca8031 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key
	I0429 10:42:44.091545   10056 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.key
	I0429 10:42:44.092582   10056 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.crt with IP's: []
	I0429 10:42:44.237393   10056 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.crt ...
	I0429 10:42:44.237393   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.crt: {Name:mkc6eb1aedfcf2cab142732d150bfceca3cd3f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:44.239065   10056 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.key ...
	I0429 10:42:44.239065   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.key: {Name:mk89e104d9ba1842ecd3b71ee6a357a1868627cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:42:44.249938   10056 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 10:42:44.250084   10056 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 10:42:44.250084   10056 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 10:42:44.250877   10056 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 10:42:44.252912   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 10:42:44.301873   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 10:42:44.357263   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 10:42:44.406487   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 10:42:44.457406   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 10:42:44.507618   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 10:42:44.556659   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 10:42:44.608892   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 10:42:44.656039   10056 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 10:42:44.704120   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 10:42:44.751164   10056 ssh_runner.go:195] Run: openssl version
	I0429 10:42:44.774726   10056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 10:42:44.811254   10056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 10:42:44.817716   10056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 10:42:44.831658   10056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 10:42:44.857683   10056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
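The b5213941.0 name in the symlink above is not arbitrary: it is OpenSSL's subject-name hash of the minikube CA, which is how OpenSSL-based clients locate trust anchors in /etc/ssl/certs. The two openssl invocations in this sequence fit together like this (sketch):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"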
	I0429 10:42:44.890714   10056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 10:42:44.897721   10056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 10:42:44.897721   10056 kubeadm.go:391] StartCluster: {Name:addons-839400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-839400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.182.147 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 10:42:44.908274   10056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 10:42:44.943998   10056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 10:42:44.982851   10056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 10:42:45.015110   10056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 10:42:45.034040   10056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 10:42:45.034110   10056 kubeadm.go:156] found existing configuration files:
	
	I0429 10:42:45.050030   10056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 10:42:45.068820   10056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 10:42:45.081301   10056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 10:42:45.112879   10056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 10:42:45.131448   10056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 10:42:45.144714   10056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 10:42:45.175406   10056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 10:42:45.192571   10056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 10:42:45.205413   10056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 10:42:45.238008   10056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 10:42:45.255981   10056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 10:42:45.269424   10056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 10:42:45.288110   10056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 10:42:45.547972   10056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 10:42:59.764887   10056 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 10:42:59.765056   10056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 10:42:59.765123   10056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 10:42:59.765413   10056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 10:42:59.765812   10056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 10:42:59.766087   10056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 10:42:59.769583   10056 out.go:204]   - Generating certificates and keys ...
	I0429 10:42:59.769715   10056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 10:42:59.769961   10056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 10:42:59.770153   10056 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 10:42:59.770296   10056 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 10:42:59.770353   10056 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 10:42:59.770353   10056 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 10:42:59.770353   10056 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 10:42:59.770954   10056 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-839400 localhost] and IPs [172.26.182.147 127.0.0.1 ::1]
	I0429 10:42:59.771099   10056 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 10:42:59.771099   10056 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-839400 localhost] and IPs [172.26.182.147 127.0.0.1 ::1]
	I0429 10:42:59.771099   10056 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 10:42:59.771099   10056 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 10:42:59.771697   10056 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 10:42:59.771837   10056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 10:42:59.771837   10056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 10:42:59.771837   10056 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 10:42:59.771837   10056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 10:42:59.771837   10056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 10:42:59.772408   10056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 10:42:59.772551   10056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 10:42:59.772551   10056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 10:42:59.775049   10056 out.go:204]   - Booting up control plane ...
	I0429 10:42:59.775049   10056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 10:42:59.776002   10056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 10:42:59.776002   10056 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 10:42:59.776002   10056 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003167016s
	I0429 10:42:59.777521   10056 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 10:42:59.777521   10056 kubeadm.go:309] [api-check] The API server is healthy after 7.003337083s
	I0429 10:42:59.777521   10056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 10:42:59.778073   10056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 10:42:59.778294   10056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 10:42:59.778294   10056 kubeadm.go:309] [mark-control-plane] Marking the node addons-839400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 10:42:59.778826   10056 kubeadm.go:309] [bootstrap-token] Using token: xsp3jd.3o7pc33vexavlbop
	I0429 10:42:59.781079   10056 out.go:204]   - Configuring RBAC rules ...
	I0429 10:42:59.781079   10056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 10:42:59.781726   10056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 10:42:59.781726   10056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 10:42:59.782318   10056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 10:42:59.782537   10056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 10:42:59.782733   10056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 10:42:59.782984   10056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 10:42:59.782984   10056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 10:42:59.783400   10056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 10:42:59.783400   10056 kubeadm.go:309] 
	I0429 10:42:59.783400   10056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 10:42:59.783400   10056 kubeadm.go:309] 
	I0429 10:42:59.783400   10056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 10:42:59.783400   10056 kubeadm.go:309] 
	I0429 10:42:59.783400   10056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 10:42:59.783400   10056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 10:42:59.783400   10056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 10:42:59.783400   10056 kubeadm.go:309] 
	I0429 10:42:59.784119   10056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 10:42:59.784218   10056 kubeadm.go:309] 
	I0429 10:42:59.784339   10056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 10:42:59.784339   10056 kubeadm.go:309] 
	I0429 10:42:59.784339   10056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 10:42:59.784339   10056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 10:42:59.784715   10056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 10:42:59.784715   10056 kubeadm.go:309] 
	I0429 10:42:59.784807   10056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 10:42:59.784807   10056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 10:42:59.784807   10056 kubeadm.go:309] 
	I0429 10:42:59.784807   10056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xsp3jd.3o7pc33vexavlbop \
	I0429 10:42:59.785435   10056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 10:42:59.785435   10056 kubeadm.go:309] 	--control-plane 
	I0429 10:42:59.785435   10056 kubeadm.go:309] 
	I0429 10:42:59.785847   10056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 10:42:59.785847   10056 kubeadm.go:309] 
	I0429 10:42:59.786061   10056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xsp3jd.3o7pc33vexavlbop \
	I0429 10:42:59.786243   10056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 10:42:59.786243   10056 cni.go:84] Creating CNI manager for ""
	I0429 10:42:59.786243   10056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 10:42:59.791350   10056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 10:42:59.805823   10056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 10:42:59.830170   10056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
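The 496-byte 1-k8s.conflist written here is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. The log does not show its contents; a minimal bridge conflist of the same general shape, using the cluster's 10.244.0.0/16 pod CIDR, would look roughly like this (an illustrative sketch, not the literal file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF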
	I0429 10:42:59.874606   10056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 10:42:59.890810   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-839400 minikube.k8s.io/updated_at=2024_04_29T10_42_59_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=addons-839400 minikube.k8s.io/primary=true
	I0429 10:42:59.890810   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:42:59.898502   10056 ops.go:34] apiserver oom_adj: -16
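
The probe above reads the apiserver's OOM-killer adjustment; the reported -16 tells the kernel to strongly disfavour killing kube-apiserver under memory pressure. A sketch of the same check in Go (assumes pgrep is on PATH, as the shell one-liner does):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest kube-apiserver PID, like $(pgrep kube-apiserver) above.
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	// -16 strongly deprioritizes the apiserver as an OOM-kill target.
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }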
	I0429 10:43:00.092647   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:00.605176   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:01.107579   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:01.592335   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:02.106449   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:02.591519   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:03.097900   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:03.600904   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:04.100748   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:04.601266   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:05.093081   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:05.595323   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:06.096798   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:06.600056   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:07.100696   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:07.604819   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:08.106928   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:08.593026   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:09.095500   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:09.598621   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:10.106780   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:10.602105   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:11.095795   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:11.594687   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:12.099724   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:12.605660   10056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 10:43:12.753093   10056 kubeadm.go:1107] duration metric: took 12.8782115s to wait for elevateKubeSystemPrivileges
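
The ~500 ms cadence of the repeated "kubectl get sa default" runs above is a plain poll: elevateKubeSystemPrivileges waits for the "default" ServiceAccount to exist (here, 12.88 s) before kube-system privileges are considered elevated. A minimal sketch of that loop, with command and paths taken from the log (the helper name is mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" every 500ms until it
    // succeeds or the timeout elapses, mirroring the poll in the log above.
    func waitForDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("default service account not ready: %w", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForDefaultSA(2 * time.Minute); err != nil {
    		panic(err)
    	}
    }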
	W0429 10:43:12.753252   10056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 10:43:12.753252   10056 kubeadm.go:393] duration metric: took 27.8553066s to StartCluster
	I0429 10:43:12.753252   10056 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:43:12.753585   10056 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 10:43:12.754591   10056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:43:12.756266   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 10:43:12.756340   10056 start.go:234] Will wait 6m0s for node &{Name: IP:172.26.182.147 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 10:43:12.758931   10056 out.go:177] * Verifying Kubernetes components...
	I0429 10:43:12.756465   10056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 10:43:12.756748   10056 config.go:182] Loaded profile config "addons-839400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 10:43:12.759842   10056 addons.go:69] Setting metrics-server=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting ingress-dns=true in profile "addons-839400"
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon ingress-dns=true in "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting yakd=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting inspektor-gadget=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting default-storageclass=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting registry=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting cloud-spanner=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting storage-provisioner=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting gcp-auth=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting volumesnapshots=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting helm-tiller=true in profile "addons-839400"
	I0429 10:43:12.759842   10056 addons.go:69] Setting ingress=true in profile "addons-839400"
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon metrics-server=true in "addons-839400"
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon registry=true in "addons-839400"
	I0429 10:43:12.761844   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.761844   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon yakd=true in "addons-839400"
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon storage-provisioner=true in "addons-839400"
	I0429 10:43:12.761844   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.762831   10056 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-839400"
	I0429 10:43:12.762831   10056 addons.go:234] Setting addon cloud-spanner=true in "addons-839400"
	I0429 10:43:12.762831   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.762831   10056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-839400"
	I0429 10:43:12.762831   10056 addons.go:234] Setting addon volumesnapshots=true in "addons-839400"
	I0429 10:43:12.762831   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.762831   10056 mustload.go:65] Loading cluster: addons-839400
	I0429 10:43:12.763830   10056 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-839400"
	I0429 10:43:12.763830   10056 config.go:182] Loaded profile config "addons-839400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 10:43:12.761844   10056 addons.go:234] Setting addon inspektor-gadget=true in "addons-839400"
	I0429 10:43:12.763830   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.763830   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.763830   10056 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-839400"
	I0429 10:43:12.763830   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.764878   10056 addons.go:234] Setting addon helm-tiller=true in "addons-839400"
	I0429 10:43:12.764878   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.764878   10056 addons.go:234] Setting addon ingress=true in "addons-839400"
	I0429 10:43:12.764878   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.762831   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:12.761844   10056 host.go:66] Checking if "addons-839400" exists ...
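
The interleaved burst of "Setting addon ..." / "Checking if "addons-839400" exists ..." lines above, together with the later "waiting for startup goroutines ..." line, suggests one concurrent worker per enabled addon. A speculative sketch of that fan-out pattern; the structure and names here are mine, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	// A few entries from the toEnable map logged above; list trimmed.
    	toEnable := map[string]bool{
    		"registry":       true,
    		"metrics-server": true,
    		"ingress":        true,
    		"yakd":           true,
    		"ambassador":     false,
    	}
    	var wg sync.WaitGroup
    	for name, enabled := range toEnable {
    		if !enabled {
    			continue
    		}
    		wg.Add(1)
    		go func(name string) {
    			defer wg.Done()
    			// The real workers check the VM state, scp the addon's
    			// manifests in, and kubectl-apply them, per the log.
    			fmt.Println("enabling", name)
    		}(name)
    	}
    	wg.Wait() // cf. "start.go:240] waiting for startup goroutines ..."
    }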
	I0429 10:43:12.765835   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.767872   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.769837   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.769837   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.771821   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.772828   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.772828   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.772828   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.773853   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.774906   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.775865   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.775865   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.775865   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.775865   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:12.778833   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
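
Each libmachine "[executing ==>]" line above shells out to PowerShell to read the Hyper-V VM's state; the "[stdout =====>] : Running" blocks later in the log are its replies. The equivalent call from Go, with the command line copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command(
    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive",
    		`( Hyper-V\Get-VM addons-839400 ).state`,
    	).Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out))) // e.g. "Running"
    }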
	I0429 10:43:12.784195   10056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 10:43:14.192427   10056 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.4082209s)
	I0429 10:43:14.209408   10056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.4531303s)
	I0429 10:43:14.209408   10056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 10:43:14.221165   10056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 10:43:16.117526   10056 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.8963456s)
	I0429 10:43:16.123166   10056 node_ready.go:35] waiting up to 6m0s for node "addons-839400" to be "Ready" ...
	I0429 10:43:16.124166   10056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.9147424s)
	I0429 10:43:16.124166   10056 start.go:946] {"host.minikube.internal": 172.26.176.1} host record injected into CoreDNS's ConfigMap
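
The sed pipeline above rewrites the coredns Corefile in its ConfigMap: it inserts a log directive before errors and, ahead of the "forward . /etc/resolv.conf" line, a hosts block that resolves host.minikube.internal to the Hyper-V host-side address. The injected stanza, reconstructed from the sed expression:

    hosts {
       172.26.176.1 host.minikube.internal
       fallthrough
    }

fallthrough hands every other name back to the remaining plugins, so only host.minikube.internal is answered from the hosts block.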
	I0429 10:43:17.030183   10056 node_ready.go:49] node "addons-839400" has status "Ready":"True"
	I0429 10:43:17.030183   10056 node_ready.go:38] duration metric: took 907.0099ms for node "addons-839400" to be "Ready" ...
	I0429 10:43:17.030183   10056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0429 10:43:17.727805   10056 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-839400" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0429 10:43:17.727805   10056 start.go:159] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
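
The rescale failure above is the standard optimistic-concurrency conflict: the Deployment changed between read and write, so the Update carried a stale resourceVersion. minikube treats it as non-retryable at this point; the usual client-go remedy, shown purely as a sketch (this is not minikube's code), is retry.RetryOnConflict, which re-reads the object on every attempt:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // scaleCoreDNS re-fetches the Deployment on each attempt so the Update
    // carries the latest resourceVersion, which is what the error demands.
    func scaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		dep, err := cs.AppsV1().Deployments("kube-system").
    			Get(context.TODO(), "coredns", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		dep.Spec.Replicas = &replicas
    		_, err = cs.AppsV1().Deployments("kube-system").
    			Update(context.TODO(), dep, metav1.UpdateOptions{})
    		return err
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := scaleCoreDNS(cs, 1); err != nil {
    		panic(err)
    	}
    }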
	I0429 10:43:17.735806   10056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:18.731954   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:18.731954   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:18.746796   10056 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 10:43:18.734795   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:18.753800   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:18.755805   10056 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 10:43:18.753800   10056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 10:43:18.758799   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 10:43:18.759797   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:18.759797   10056 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 10:43:18.759797   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 10:43:18.759797   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:18.876823   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:18.876823   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:18.884990   10056 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 10:43:18.895824   10056 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 10:43:18.895824   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 10:43:18.895824   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:18.979066   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:18.979066   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:18.989706   10056 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 10:43:19.010695   10056 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 10:43:19.010695   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 10:43:19.011692   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.409417   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.409417   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.412427   10056 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-839400"
	I0429 10:43:19.412427   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:19.413430   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.475006   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.475006   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.480943   10056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 10:43:19.475935   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.483931   10056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 10:43:19.483931   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 10:43:19.483931   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.485929   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.485929   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.480943   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.490933   10056 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 10:43:19.493940   10056 addons.go:234] Setting addon default-storageclass=true in "addons-839400"
	I0429 10:43:19.520256   10056 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 10:43:19.515296   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:19.515359   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.524277   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.529264   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 10:43:19.526255   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.526255   10056 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 10:43:19.528001   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.530270   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.540340   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.540340   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.542266   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.553428   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 10:43:19.559695   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 10:43:19.555431   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.544464   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 10:43:19.544464   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.544464   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 10:43:19.561411   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.585159   10056 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 10:43:19.575526   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 10:43:19.575526   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:19.575526   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.595533   10056 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 10:43:19.595533   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.628535   10056 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 10:43:19.638528   10056 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 10:43:19.638528   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 10:43:19.638528   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.628535   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 10:43:19.654533   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 10:43:19.631535   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 10:43:19.696539   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.707842   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 10:43:19.710549   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 10:43:19.715538   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 10:43:19.717521   10056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 10:43:19.720546   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 10:43:19.720546   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 10:43:19.720546   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.733679   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.733679   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.838763   10056 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 10:43:19.805762   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:19.896357   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:19.935662   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 10:43:19.911403   10056 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 10:43:19.983191   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 10:43:19.983191   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:19.992194   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 10:43:19.997186   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 10:43:20.002188   10056 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 10:43:20.002188   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 10:43:20.002188   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:20.602063   10056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace has status "Ready":"False"
	I0429 10:43:22.917658   10056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace has status "Ready":"False"
	I0429 10:43:24.927236   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:24.927236   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:24.927236   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:24.941265   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:24.941265   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:24.941265   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:24.961232   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:24.961232   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:24.962230   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:24.990632   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:24.990632   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:24.990632   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:24.991628   10056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace has status "Ready":"False"
	I0429 10:43:25.387083   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.387083   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.387083   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.445154   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.445983   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.446323   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.450132   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.450132   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.450132   10056 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 10:43:25.455130   10056 out.go:177]   - Using image docker.io/busybox:stable
	I0429 10:43:25.459133   10056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 10:43:25.459133   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 10:43:25.460137   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:25.485235   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.485235   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.485235   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.595497   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.595497   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.595497   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.609483   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.609483   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.609483   10056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 10:43:25.609483   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 10:43:25.609483   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:25.717598   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.717598   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.717598   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.746170   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.746170   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.746170   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:25.884888   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:25.884888   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:25.884888   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:26.138998   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:26.138998   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:26.138998   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:27.003609   10056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace has status "Ready":"False"
	I0429 10:43:28.280966   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 10:43:28.280966   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:29.900889   10056 pod_ready.go:92] pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:29.900889   10056 pod_ready.go:81] duration metric: took 12.1649844s for pod "coredns-7db6d8ff4d-8cdqb" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:29.900889   10056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bnv6h" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.176030   10056 pod_ready.go:92] pod "coredns-7db6d8ff4d-bnv6h" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.176030   10056 pod_ready.go:81] duration metric: took 275.1384ms for pod "coredns-7db6d8ff4d-bnv6h" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.176030   10056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.326928   10056 pod_ready.go:92] pod "etcd-addons-839400" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.327042   10056 pod_ready.go:81] duration metric: took 151.0108ms for pod "etcd-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.327042   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.355854   10056 pod_ready.go:92] pod "kube-apiserver-addons-839400" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.355854   10056 pod_ready.go:81] duration metric: took 28.8122ms for pod "kube-apiserver-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.355854   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.396949   10056 pod_ready.go:92] pod "kube-controller-manager-addons-839400" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.396949   10056 pod_ready.go:81] duration metric: took 41.0944ms for pod "kube-controller-manager-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.396949   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2xmk" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.489167   10056 pod_ready.go:92] pod "kube-proxy-j2xmk" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.489167   10056 pod_ready.go:81] duration metric: took 92.2175ms for pod "kube-proxy-j2xmk" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.489167   10056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.518525   10056 pod_ready.go:92] pod "kube-scheduler-addons-839400" in "kube-system" namespace has status "Ready":"True"
	I0429 10:43:30.518525   10056 pod_ready.go:81] duration metric: took 29.3578ms for pod "kube-scheduler-addons-839400" in "kube-system" namespace to be "Ready" ...
	I0429 10:43:30.518525   10056 pod_ready.go:38] duration metric: took 13.4882331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 10:43:30.518525   10056 api_server.go:52] waiting for apiserver process to appear ...
	I0429 10:43:30.606306   10056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 10:43:30.695512   10056 api_server.go:72] duration metric: took 17.9389023s to wait for apiserver process to appear ...
	I0429 10:43:30.695512   10056 api_server.go:88] waiting for apiserver healthz status ...
	I0429 10:43:30.695704   10056 api_server.go:253] Checking apiserver healthz at https://172.26.182.147:8443/healthz ...
	I0429 10:43:30.707529   10056 api_server.go:279] https://172.26.182.147:8443/healthz returned 200:
	ok
	I0429 10:43:30.709531   10056 api_server.go:141] control plane version: v1.30.0
	I0429 10:43:30.709531   10056 api_server.go:131] duration metric: took 14.0185ms to wait for apiserver health ...
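
The healthz wait above is a simple HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with body "ok". A minimal Go sketch of one such probe; InsecureSkipVerify is used only because this illustration does not load the cluster CA, while minikube's real check verifies it:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    	}}
    	resp, err := client.Get("https://172.26.182.147:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expects: 200 ok
    }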
	I0429 10:43:30.709531   10056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 10:43:30.741294   10056 system_pods.go:59] 7 kube-system pods found
	I0429 10:43:30.741294   10056 system_pods.go:61] "coredns-7db6d8ff4d-8cdqb" [0b97e670-722f-453b-9b54-4941d1211255] Running
	I0429 10:43:30.741294   10056 system_pods.go:61] "coredns-7db6d8ff4d-bnv6h" [b66e7f2c-cb52-4d38-99d9-e6b6d06d242d] Running
	I0429 10:43:30.741294   10056 system_pods.go:61] "etcd-addons-839400" [e4609011-a06a-4c40-81ad-f31d1ef8a0b6] Running
	I0429 10:43:30.741294   10056 system_pods.go:61] "kube-apiserver-addons-839400" [607b4693-6d50-4a36-9fa6-280a6465031c] Running
	I0429 10:43:30.741294   10056 system_pods.go:61] "kube-controller-manager-addons-839400" [80acfc78-eda2-4902-9091-414173da37b4] Running
	I0429 10:43:30.741294   10056 system_pods.go:61] "kube-proxy-j2xmk" [35fa0c21-cf13-4337-8337-e77cd0bcc128] Running
	I0429 10:43:30.742311   10056 system_pods.go:61] "kube-scheduler-addons-839400" [36b8b552-4583-42e8-9258-7c11cc33ec8d] Running
	I0429 10:43:30.742311   10056 system_pods.go:74] duration metric: took 32.7796ms to wait for pod list to return data ...
	I0429 10:43:30.742311   10056 default_sa.go:34] waiting for default service account to be created ...
	I0429 10:43:30.770296   10056 default_sa.go:45] found service account: "default"
	I0429 10:43:30.770296   10056 default_sa.go:55] duration metric: took 27.985ms for default service account to be created ...
	I0429 10:43:30.770296   10056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 10:43:30.956020   10056 system_pods.go:86] 7 kube-system pods found
	I0429 10:43:30.956237   10056 system_pods.go:89] "coredns-7db6d8ff4d-8cdqb" [0b97e670-722f-453b-9b54-4941d1211255] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "coredns-7db6d8ff4d-bnv6h" [b66e7f2c-cb52-4d38-99d9-e6b6d06d242d] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "etcd-addons-839400" [e4609011-a06a-4c40-81ad-f31d1ef8a0b6] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "kube-apiserver-addons-839400" [607b4693-6d50-4a36-9fa6-280a6465031c] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "kube-controller-manager-addons-839400" [80acfc78-eda2-4902-9091-414173da37b4] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "kube-proxy-j2xmk" [35fa0c21-cf13-4337-8337-e77cd0bcc128] Running
	I0429 10:43:30.956280   10056 system_pods.go:89] "kube-scheduler-addons-839400" [36b8b552-4583-42e8-9258-7c11cc33ec8d] Running
	I0429 10:43:30.956280   10056 system_pods.go:126] duration metric: took 185.982ms to wait for k8s-apps to be running ...
	I0429 10:43:30.956280   10056 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 10:43:30.998094   10056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 10:43:31.065532   10056 system_svc.go:56] duration metric: took 109.2515ms WaitForService to wait for kubelet
	I0429 10:43:31.065532   10056 kubeadm.go:576] duration metric: took 18.3089189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 10:43:31.065532   10056 node_conditions.go:102] verifying NodePressure condition ...
	I0429 10:43:31.148063   10056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 10:43:31.148063   10056 node_conditions.go:123] node cpu capacity is 2
	I0429 10:43:31.148063   10056 node_conditions.go:105] duration metric: took 82.5299ms to run NodePressure ...
	I0429 10:43:31.148063   10056 start.go:240] waiting for startup goroutines ...
	I0429 10:43:31.274062   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:31.274062   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:31.274062   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:31.695341   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:31.695439   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:31.695439   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:31.771123   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:31.771123   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:31.772960   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
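
The sshutil line above opens a key-authenticated SSH connection into the VM, which the many "scp memory --> <path>" steps then reuse to stream in-memory addon manifests to files on the guest. A sketch of both halves with golang.org/x/crypto/ssh; the IP, port, user, and key path are copied from the log, while the writeRemote helper and its sudo-tee approach are mine, not minikube's actual ssh_runner:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemote streams in-memory bytes to a file on the VM by piping
    // them into sudo tee, so paths under /etc are writable.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
    	}
    	client, err := ssh.Dial("tcp", "172.26.182.147:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	if err := writeRemote(client, "/tmp/hello.txt", []byte("hello\n")); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote /tmp/hello.txt")
    }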
	I0429 10:43:31.961088   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:31.961088   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:31.962551   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.068834   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.068950   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.070547   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.128215   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.129177   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.129808   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.195903   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 10:43:32.316642   10056 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 10:43:32.316642   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 10:43:32.328481   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.328481   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.329471   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.414864   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.415194   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.416283   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.496969   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.496969   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.498121   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.529315   10056 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 10:43:32.529315   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 10:43:32.590111   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.590111   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.591463   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.643914   10056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 10:43:32.643914   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 10:43:32.652926   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 10:43:32.688191   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.688298   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.688452   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.754373   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.754373   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.755482   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.817905   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.817905   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.818916   10056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 10:43:32.818916   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 10:43:32.818916   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.827923   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 10:43:32.874582   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:32.874582   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:32.875410   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:32.929600   10056 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 10:43:32.929600   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 10:43:32.999362   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 10:43:33.077060   10056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 10:43:33.077143   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 10:43:33.130412   10056 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 10:43:33.130526   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 10:43:33.217040   10056 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 10:43:33.217040   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 10:43:33.233727   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 10:43:33.243817   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 10:43:33.300620   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 10:43:33.300620   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 10:43:33.321970   10056 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 10:43:33.321970   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 10:43:33.426924   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 10:43:33.441040   10056 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 10:43:33.441040   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 10:43:33.526204   10056 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 10:43:33.526204   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 10:43:33.604812   10056 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 10:43:33.604812   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 10:43:33.619095   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 10:43:33.619095   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 10:43:33.677987   10056 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 10:43:33.678092   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 10:43:33.689344   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:33.689344   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:33.689906   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:33.823428   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 10:43:33.823584   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 10:43:33.903570   10056 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 10:43:33.903570   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 10:43:33.954604   10056 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 10:43:33.954604   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 10:43:33.976836   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 10:43:33.976836   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 10:43:33.977834   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 10:43:34.202837   10056 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 10:43:34.202837   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 10:43:34.233836   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.0379158s)
	I0429 10:43:34.376020   10056 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 10:43:34.376020   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 10:43:34.421979   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 10:43:34.422031   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 10:43:34.428837   10056 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 10:43:34.428837   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 10:43:34.637050   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:34.637124   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:34.637780   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:34.709053   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 10:43:34.785117   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:34.785117   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:34.786520   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:34.864195   10056 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 10:43:34.864251   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 10:43:34.915428   10056 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 10:43:34.915428   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 10:43:34.958885   10056 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 10:43:34.958885   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 10:43:35.255269   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 10:43:35.255269   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 10:43:35.343062   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 10:43:35.530125   10056 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 10:43:35.530196   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 10:43:35.586324   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 10:43:35.631849   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.9788988s)
	I0429 10:43:35.761738   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 10:43:35.824147   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 10:43:35.824222   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 10:43:36.215970   10056 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 10:43:36.216078   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 10:43:36.329176   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 10:43:36.329176   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 10:43:36.468698   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:36.468698   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:36.469088   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
	I0429 10:43:36.909880   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 10:43:37.368813   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 10:43:37.368813   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 10:43:37.675782   10056 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 10:43:37.675972   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 10:43:37.715636   10056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 10:43:37.783200   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.9546493s)
	I0429 10:43:38.456446   10056 addons.go:234] Setting addon gcp-auth=true in "addons-839400"
	I0429 10:43:38.456628   10056 host.go:66] Checking if "addons-839400" exists ...
	I0429 10:43:38.495206   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:38.498056   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 10:43:39.419900   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.4204855s)
	I0429 10:43:40.837828   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:40.837873   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:40.852430   10056 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 10:43:40.852430   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839400 ).state
	I0429 10:43:43.077271   10056 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 10:43:43.077271   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:43.077271   10056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]
	I0429 10:43:45.791469   10056 main.go:141] libmachine: [stdout =====>] : 172.26.182.147
	
	I0429 10:43:45.791979   10056 main.go:141] libmachine: [stderr =====>] : 
	I0429 10:43:45.792410   10056 sshutil.go:53] new ssh client: &{IP:172.26.182.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-839400\id_rsa Username:docker}
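The libmachine "[executing ==>]" lines show the Hyper-V driver shelling out to PowerShell to read VM state and the first NIC address before opening an SSH client. A hypothetical Go equivalent of that query is sketched below; the PowerShell expression and VM name are copied verbatim from the log, the wrapper around them is illustrative.

    // hyperv_ip.go: ask Hyper-V for a VM's first IP address via PowerShell,
    // using the same expression the libmachine driver logs under [executing ==>].
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	expr := `(( Hyper-V\Get-VM addons-839400 ).networkadapters[0]).ipaddresses[0]`
    	out, err := exec.Command(
    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", expr,
    	).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out))) // e.g. 172.26.182.147
    }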
	I0429 10:43:45.836303   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.6023895s)
	I0429 10:43:45.836368   10056 addons.go:470] Verifying addon ingress=true in "addons-839400"
	I0429 10:43:45.836303   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.5923844s)
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.4093441s)
	I0429 10:43:45.836368   10056 addons.go:470] Verifying addon metrics-server=true in "addons-839400"
	I0429 10:43:45.839230   10056 out.go:177] * Verifying ingress addon...
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.8584381s)
	I0429 10:43:45.839230   10056 addons.go:470] Verifying addon registry=true in "addons-839400"
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.4932216s)
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.2499616s)
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.0745489s)
	I0429 10:43:45.837010   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.9269695s)
	I0429 10:43:45.836368   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.127225s)
	I0429 10:43:45.845635   10056 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0429 10:43:45.845635   10056 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 10:43:45.845635   10056 out.go:177] * Verifying registry addon...
	I0429 10:43:45.849599   10056 retry.go:31] will retry after 128.675624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
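The failure above is the classic CRD ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass land in the same apply batch, and the class is rejected with "no matches for kind" because the CRD is not yet registered and served; retry.go then schedules another attempt after a backoff (and, as seen further down, the retried command adds --force). A generic backoff retry around the apply, in the spirit of what the log shows but simplified (single file, naive retriable-error check), might look like this:

    // retry_apply.go: re-run `kubectl apply` with growing backoff while the API
    // server still reports "no matches for kind", i.e. the CRD is not yet served.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	backoff := 128 * time.Millisecond // first delay, as in the log's "will retry after 128.675624ms"
    	for attempt := 1; attempt <= 5; attempt++ {
    		out, err := exec.Command("kubectl", "apply",
    			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").CombinedOutput()
    		if err == nil {
    			log.Println("apply succeeded")
    			return
    		}
    		if !strings.Contains(string(out), "no matches for kind") {
    			log.Fatalf("non-retriable apply error: %v\n%s", err, out)
    		}
    		log.Printf("attempt %d: CRD not registered yet, retrying in %s", attempt, backoff)
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	os.Exit(1)
    }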
	I0429 10:43:45.849599   10056 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-839400 service yakd-dashboard -n yakd-dashboard
	
	I0429 10:43:45.854597   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 10:43:45.867133   10056 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 10:43:45.867133   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:45.879303   10056 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 10:43:45.879303   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
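The kapi.go lines that dominate the rest of this log are a poll loop: list the pods behind a label selector, print their phase, and repeat until all of them report Running. A condensed client-go version of that wait (namespace and selector taken from the log; the kubeconfig path is a placeholder) could look like the sketch below:

    // wait_pods.go: poll until all pods matching a label selector are Running,
    // the pattern behind the kapi.go "waiting for pod ..." lines.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    				LabelSelector: "kubernetes.io/minikube-addons=registry",
    			})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					log.Printf("waiting for pod %q, current state: %s", p.Name, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return len(pods.Items) > 0, nil
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    }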
	W0429 10:43:45.886107   10056 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
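That storageclass warning is an optimistic-concurrency conflict: the object changed between the read and the update, so the write was rejected with "the object has been modified; please apply your changes to the latest version and try again". The standard client-go remedy is to redo the read-modify-write under retry.RetryOnConflict; a minimal sketch of demoting the "local-path" class, assuming the well-known default-class annotation and reusing the placeholder kubeconfig path from the previous sketch:

    // storageclass_retry.go: demote a StorageClass from default, retrying the
    // read-modify-write whenever the API server reports a version conflict.
    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
    		return err // a Conflict result triggers a fresh Get+Update round
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }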
	I0429 10:43:46.000070   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 10:43:46.356096   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:46.363893   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:46.864869   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:46.865712   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:47.365139   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:47.385138   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:47.866449   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:47.877151   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:48.343424   10056 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.4908841s)
	I0429 10:43:48.344155   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.8459235s)
	I0429 10:43:48.346929   10056 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 10:43:48.347059   10056 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-839400"
	I0429 10:43:48.352506   10056 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 10:43:48.352506   10056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 10:43:48.358160   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 10:43:48.358160   10056 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 10:43:48.358709   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 10:43:48.360794   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:48.404648   10056 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 10:43:48.404648   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:48.408175   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:48.549306   10056 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 10:43:48.549341   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 10:43:48.735348   10056 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 10:43:48.735348   10056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 10:43:48.800917   10056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 10:43:48.859971   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:48.872516   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:48.892324   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:48.901744   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.9015829s)
	I0429 10:43:49.361308   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:49.368997   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:49.373328   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:49.870967   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:49.880155   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:49.885750   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:50.394132   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:50.441845   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:50.443883   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:50.505263   10056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.7043321s)
	I0429 10:43:50.513626   10056 addons.go:470] Verifying addon gcp-auth=true in "addons-839400"
	I0429 10:43:50.518631   10056 out.go:177] * Verifying gcp-auth addon...
	I0429 10:43:50.522637   10056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 10:43:50.558309   10056 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 10:43:50.558373   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:50.859773   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:50.866282   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:50.869063   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:51.035245   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:51.371598   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:51.371856   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:51.376757   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:51.544814   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:51.861768   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:51.869726   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:51.870461   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:52.030368   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:52.377081   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:52.377981   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:52.377981   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:52.527778   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:52.861342   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:52.872889   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:52.876926   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:53.337191   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:53.361558   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:53.382586   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:53.383556   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:53.535438   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:53.865139   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:53.867558   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:53.873202   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:54.033687   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:54.373055   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:54.373270   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:54.373270   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:54.537489   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:54.867059   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:54.882047   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:54.882892   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:55.035459   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:55.369566   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:55.371183   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:55.391162   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:55.533951   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:55.867094   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:55.867788   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:55.875323   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:56.036171   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:56.373650   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:56.384994   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:56.388773   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:56.544330   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:56.860686   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:56.878157   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:56.880160   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:57.033576   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:57.370942   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:57.371945   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:57.372950   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:57.540241   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:57.860990   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:57.866140   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:57.871290   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:58.033625   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:58.368626   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:58.372297   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:58.375145   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:58.539002   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:58.856379   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:58.866798   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:58.875202   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:59.029658   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:59.380138   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:43:59.380342   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:59.387218   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:59.546203   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:43:59.857985   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:43:59.863912   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:43:59.871551   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:00.033728   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:00.370752   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:00.373092   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:00.374875   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:00.541228   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:01.123489   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:01.637535   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:01.637535   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:01.639473   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:01.650911   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:01.661971   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:01.663165   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:01.666627   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:01.869636   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:01.872681   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:01.875975   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:02.041616   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:02.358218   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:02.366737   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:02.372308   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:02.531737   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:02.867872   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:02.867872   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:02.870885   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:03.034500   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:03.370209   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:03.376803   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:03.383764   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:03.542761   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:03.857669   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:03.861629   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:03.867301   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:04.030297   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:04.366430   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:04.367110   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:04.374637   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:04.536639   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:04.873583   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:04.873708   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:04.876170   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:05.043170   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:05.360454   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:05.367306   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:05.370044   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:05.533023   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:05.870036   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:05.870036   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:05.873034   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:06.039239   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:06.358119   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:06.363075   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:06.368724   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:06.532504   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:06.867516   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:06.869529   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:06.869529   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:07.040560   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:07.359881   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:07.365972   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:07.372524   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:07.531975   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:07.878703   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:07.878703   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:07.880476   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:08.041797   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:08.359782   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:08.370481   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:08.371493   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:08.533948   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:08.855554   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:08.878561   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:08.883179   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:09.041763   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:09.463367   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:09.464561   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:09.464771   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:09.929824   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:09.930810   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:09.930810   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:09.932495   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:10.031937   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:10.372847   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:10.372902   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:10.378100   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:10.528037   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:10.859376   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:10.868527   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:10.871383   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:11.033370   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:11.762499   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:11.766080   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:11.766729   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:11.767928   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:11.917665   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:11.917924   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:11.921465   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:12.664480   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:12.674432   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:12.682025   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:12.682025   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:12.682305   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:12.874179   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:12.878765   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:12.881661   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:13.036169   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:13.374337   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:13.374337   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:13.376331   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:13.538320   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:13.865558   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:13.866733   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:13.871545   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:14.038128   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:14.355768   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:14.381829   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:14.382719   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:14.530421   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:14.865380   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:14.869084   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:14.876312   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:15.035587   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:15.357453   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:15.361456   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:15.370379   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:15.546149   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:15.865812   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:15.865812   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:15.868766   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:16.035291   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:16.372952   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:16.373283   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:16.375568   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:16.541042   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:16.860226   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:16.866618   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:16.869929   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:17.035216   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:17.376724   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:17.401392   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:17.401690   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:17.552767   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:17.865628   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:17.874146   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:17.875322   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:18.037847   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:18.373925   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:18.376887   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:18.378336   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:18.528630   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:18.870242   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:18.874226   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:18.874968   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:19.038085   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:19.358581   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:19.366657   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:19.371663   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:19.534000   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:19.870010   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:19.870280   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:19.873872   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:20.041879   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:20.364092   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:20.364092   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:20.371289   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:20.536002   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:20.872728   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:20.878583   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:20.878583   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:21.041501   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:21.360223   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:21.367803   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:21.368593   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:21.532663   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:21.867247   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:21.870959   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:21.874006   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:22.037712   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:22.370781   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:22.372587   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:22.378637   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:22.528908   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:22.868625   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:22.873808   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:22.876483   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:23.037215   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:23.361377   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:23.366435   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:23.371594   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:23.530389   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:23.874001   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:23.876110   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:23.878090   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:24.039738   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:24.360304   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:24.366459   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:24.369614   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:24.532419   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:24.872088   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:24.873104   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:24.877794   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:25.040047   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:25.359062   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:25.370932   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:25.375227   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:25.532234   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:25.862694   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:25.862901   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:25.868348   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:26.034177   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:26.369334   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:26.369334   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:26.373199   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:26.543005   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:26.863002   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:26.871810   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:26.872869   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:27.034722   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:27.373355   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:27.374166   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:27.379325   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:27.543479   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:27.862983   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:27.863848   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:27.870288   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:28.056515   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:28.715879   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:28.719590   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:28.722863   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:28.724078   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:28.870212   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:28.870769   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:28.874730   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:29.041480   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:29.360177   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:29.366995   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:29.369291   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:29.531283   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:29.868173   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:29.871803   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:29.871803   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:30.038877   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:30.359705   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:30.371350   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:30.371350   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:30.530591   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:30.861488   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:30.869486   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:30.870481   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:31.034545   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:31.372674   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:31.380676   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:31.381446   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:31.542203   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:31.862364   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:31.869988   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:31.872975   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:32.036539   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:33.220085   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:33.220085   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:33.221069   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:33.223040   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:34.392725   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:34.396592   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:34.399529   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:34.402532   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:34.403809   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:34.406143   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:34.411150   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:34.419724   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:34.540038   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:34.858033   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:34.865466   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:34.871999   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:35.033079   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:35.371836   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:35.372699   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:35.378373   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:35.539438   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:35.869145   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:35.871743   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:35.876001   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:36.051109   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:36.368746   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:36.370102   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:36.378281   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:36.538388   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:36.874083   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:36.874083   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:36.875093   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:37.029130   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:37.371088   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:37.371274   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:37.376936   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:37.537533   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:37.874271   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:37.874984   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:37.877589   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:38.029229   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:38.370232   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:38.370671   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:38.375398   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:38.546209   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:38.859215   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:38.875595   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:38.877590   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:39.032280   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:39.371603   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:39.371831   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:39.374227   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:39.765105   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:39.872137   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:39.872137   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:39.872137   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:40.043842   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:40.375795   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:40.376371   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:40.378635   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:40.530232   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:40.867366   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:40.870240   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:40.871237   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:41.036907   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:41.378252   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:41.378252   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:41.379332   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:41.534182   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:41.877652   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:41.877824   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:41.881555   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:42.046563   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:42.364816   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:42.381816   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:42.382608   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:42.536253   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:42.869367   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:42.871137   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:42.872770   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:43.041813   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:43.372216   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:43.372827   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:43.373042   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:43.535231   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:43.874649   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:43.875696   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:43.876547   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:44.043092   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:44.363636   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:44.364667   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:44.371446   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:44.534724   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:44.885277   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:44.885396   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:44.889026   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:45.614841   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:45.615877   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:45.616842   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:45.619847   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:45.619847   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:45.861460   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:45.875004   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:45.883996   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:46.028804   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:46.378677   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:46.383816   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:46.386953   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:46.542556   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:46.879817   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:46.880284   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:46.880340   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:47.041620   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:47.358170   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:47.362047   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:47.368033   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:47.532286   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:47.872464   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:47.873513   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:47.876459   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:48.042534   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:48.630775   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:48.633439   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:48.634685   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:48.634685   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:48.872169   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:48.878320   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:48.884133   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:49.039478   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:49.386405   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:49.396825   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:49.405003   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:49.554758   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:49.893123   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:49.916575   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:49.919253   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:50.032276   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:50.396517   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:50.396517   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:50.397079   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:50.540009   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:50.882320   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:50.883444   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:50.886732   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:51.044992   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:51.359254   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:51.370844   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:51.372998   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:51.533864   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:51.875383   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:51.875701   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:51.881435   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:52.040159   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:52.358416   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:52.367731   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:52.370756   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:52.531833   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:52.870551   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:52.872918   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:52.873553   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:53.046465   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:53.360970   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:53.365785   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:53.367721   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:53.532499   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:53.869757   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:53.874006   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:53.875289   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:54.039660   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:54.358116   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:54.369796   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:54.377166   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:54.529815   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:54.862365   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:54.869659   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:54.872505   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:55.032955   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:55.365051   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:55.365866   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:55.375141   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:55.538124   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:55.875083   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:55.876447   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:55.876447   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:56.044244   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:56.359553   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:56.366221   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:56.375597   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:56.531238   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:56.873929   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:56.874320   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:56.876925   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:57.040732   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:57.360987   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:57.369292   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:57.370097   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:57.531825   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:57.875604   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:57.875692   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:57.879627   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:58.045951   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:58.408496   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:58.409470   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:58.413542   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:58.530042   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:58.869167   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:58.869849   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:58.870114   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:59.041334   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:59.360230   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:59.365268   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:59.372293   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:44:59.533591   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:44:59.872243   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:44:59.875915   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:44:59.881759   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:00.053571   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:00.363036   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:00.368741   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:00.375005   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:00.535548   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:01.588096   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:01.588711   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:01.589931   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:01.590619   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:01.597284   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:01.598389   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:01.599058   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:01.601118   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:02.474005   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:02.474361   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:02.474832   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:02.477658   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:02.491566   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:02.495545   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:02.496622   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:02.538274   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:02.874667   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:02.876425   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:02.878130   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:03.047810   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:03.357747   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:03.362782   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 10:45:03.369873   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:03.535113   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:03.861292   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:03.866572   10056 kapi.go:107] duration metric: took 1m18.0113511s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 10:45:03.869963   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:04.099292   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:04.370483   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:04.374659   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:04.542022   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:04.859080   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:04.867674   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:05.033186   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:05.370942   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:05.373696   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:05.538794   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:05.860753   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:05.867513   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:06.034937   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:06.368563   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:06.371542   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:06.538480   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:06.872822   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:06.873642   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:07.041780   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:07.361298   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:07.369917   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:07.534417   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:07.873608   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:07.873765   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:08.065774   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:08.617948   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:08.621255   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:08.621295   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:08.866214   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:08.869494   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:09.036509   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:09.372303   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:09.373763   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:09.542943   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:09.861480   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:09.867197   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:10.035462   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:10.374381   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:10.374381   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:10.529156   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:10.862103   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:10.869781   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:11.034516   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:11.377094   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:11.382493   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:11.543084   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:11.868543   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:11.873414   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:12.035517   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:12.368428   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:12.372815   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:12.538859   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:12.872501   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:12.874571   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:13.040521   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:13.809389   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:13.810393   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:13.813483   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:14.301411   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:14.302412   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:14.308331   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:14.417756   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:14.417840   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:14.539694   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:14.858036   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:14.894462   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:15.034068   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:15.379628   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:15.379628   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:15.549744   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:15.862127   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:15.874705   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:16.034752   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:16.384792   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:16.385324   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:16.660857   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:16.868603   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:16.868603   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:17.046378   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:17.357407   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:17.377607   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:17.529314   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:17.869059   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:17.871279   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:18.040704   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:18.361372   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:18.367315   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:18.534901   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:18.871274   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:18.876235   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:19.032601   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:19.369264   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:19.370240   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:19.542190   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:19.859045   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:19.865787   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:20.032396   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:20.371192   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:20.371192   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:20.539990   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:20.857685   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:20.874207   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:21.030236   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:21.371388   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:21.372152   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:21.540373   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:21.859079   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:21.865816   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:22.033431   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:22.372361   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:22.373416   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:22.543570   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:22.863220   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:22.869872   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:23.037102   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:23.360349   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:23.370176   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:23.530715   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:23.863995   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:23.871077   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:24.032252   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:24.364279   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:24.370133   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:24.533147   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:25.068582   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:25.108347   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:25.114805   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:25.372176   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:25.380053   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:25.533756   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:25.866857   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:25.871686   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:26.038357   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:26.360186   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:26.368603   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:26.532450   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:26.869296   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:26.870187   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:27.038279   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:27.374685   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:27.375026   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:27.543035   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:27.863012   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:27.872055   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:28.035069   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:28.372854   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:28.376369   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:28.529777   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:28.862724   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:28.871756   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:29.374292   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:29.531694   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:29.538625   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:29.540834   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:29.879719   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:29.879719   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:30.054472   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:30.364015   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:30.372033   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:30.533919   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:30.868506   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:30.868780   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:31.040436   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:31.360766   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:31.376101   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:31.538867   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:31.878390   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:31.878441   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:32.040356   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:32.358862   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:32.372462   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:32.531722   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:32.863931   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:32.869557   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:33.036857   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:33.372868   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:33.375491   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:33.545428   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:33.864606   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:33.869589   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:34.038866   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:34.474656   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:34.475083   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:34.532311   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:34.874380   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:34.875969   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:35.042372   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:35.358563   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:35.375640   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:35.534020   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:35.871769   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:35.871769   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:36.036183   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:36.377831   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:36.381292   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:36.530623   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:36.869199   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:36.873182   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:37.044152   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:37.365205   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:37.376984   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:37.677519   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:37.858215   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:37.866365   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:38.048673   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:38.362498   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:38.368246   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:38.535575   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:38.892096   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:38.893096   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:39.045094   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:39.362532   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:39.368715   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:39.536597   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:39.869639   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:39.876623   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:40.209406   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:40.469540   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:40.470075   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:40.541114   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:40.876002   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:40.876002   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:41.055388   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:41.364793   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:41.372775   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:41.533378   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:41.873579   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:41.940299   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:42.047661   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:42.358785   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:42.374579   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:42.531494   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:42.870894   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:42.873654   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:43.041550   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:43.361846   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:43.367611   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:43.535697   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:43.970773   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:43.973228   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:44.135523   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:44.361810   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:44.394669   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:44.531988   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:44.861822   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:44.868072   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:45.037754   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:45.383956   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:45.383956   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:45.531390   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:45.864646   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:45.870158   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:46.037088   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:46.374299   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:46.374299   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:46.532475   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:47.425877   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:47.425877   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:47.430962   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:47.432941   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:47.454084   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:47.534471   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:47.864718   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:47.871175   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:48.054158   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:48.373929   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:48.378694   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:48.543500   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:48.857516   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:48.888180   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:49.032512   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:49.369763   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:49.371768   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:49.769933   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:49.861768   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:49.868941   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:50.036259   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:50.377026   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:50.378728   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:50.529700   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:50.865646   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:50.868660   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:51.038667   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:51.360604   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:51.367418   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:51.533976   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:51.867697   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:51.868616   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:52.054985   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:52.373835   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:52.387822   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:52.541513   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:52.879786   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:52.880378   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:53.033210   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:53.370755   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:53.375154   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:53.536589   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:53.873183   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:53.876179   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:54.043223   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:54.362802   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:54.367278   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:54.539463   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:55.025349   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:55.027982   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:55.032117   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:55.374573   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:55.374573   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:55.531065   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:55.862844   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:55.869799   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:56.040568   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:56.360511   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:56.370927   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:56.531768   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:57.026669   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:57.027849   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:57.030542   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:57.358695   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:57.373689   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:57.531574   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:57.870458   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:57.872840   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:58.038248   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:58.371476   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:58.374107   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:58.542114   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:58.860168   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:58.867147   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:59.033368   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:59.362439   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:59.369962   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:45:59.534095   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:45:59.866937   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:45:59.874006   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:00.039081   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:00.369370   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:00.375613   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:00.539543   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:00.874822   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:00.875248   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:01.045927   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:01.360734   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:01.371007   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:01.532275   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:01.868091   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:01.868091   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:02.038181   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:02.358282   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:02.375654   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:02.532462   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:02.865081   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:02.871339   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:03.036286   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:03.380448   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:03.381625   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:03.543227   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:03.859797   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:03.869414   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:04.031219   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:04.368471   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:04.369100   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:04.536304   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:04.877777   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:04.880546   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:05.045665   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:05.360878   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:05.367215   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:05.532958   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:05.865311   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:05.868309   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:06.039534   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:06.374822   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:06.376713   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:06.550485   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:06.974246   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:06.974306   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:07.030140   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:07.366996   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:07.377794   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:07.539674   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:07.872799   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:07.877093   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 10:46:08.044116   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:08.364692   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:08.393164   10056 kapi.go:107] duration metric: took 2m20.0337015s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 10:46:08.536339   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:08.868839   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:09.053934   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:09.360321   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:09.533954   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:09.864179   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:10.039294   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:10.406733   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:10.530891   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:10.865042   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:11.063094   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:11.359054   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:11.534418   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:11.871761   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:12.043397   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:12.361569   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:12.534263   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:12.872863   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:13.042042   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:13.358346   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:13.532152   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:13.870619   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:14.043485   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:14.751719   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:14.752788   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:14.960463   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:15.036842   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:15.370787   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:15.568520   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:15.862157   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:16.038074   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:16.370061   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:16.541737   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:16.860816   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:17.044457   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:17.363328   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:17.532806   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:17.869609   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:18.043270   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:18.361693   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:18.534940   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:18.868027   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:19.042041   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:19.359455   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:19.533238   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:19.870526   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:20.045281   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:20.361597   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:20.538115   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:21.374580   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:21.375034   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:21.385020   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:22.216570   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:22.217265   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:22.222496   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:22.386489   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:22.547036   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:22.875773   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:23.044324   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:23.364443   10056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 10:46:23.545627   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:23.861814   10056 kapi.go:107] duration metric: took 2m38.0149148s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 10:46:24.036832   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:24.544142   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:25.307860   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:25.529503   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:26.036245   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:26.532832   10056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 10:46:27.037647   10056 kapi.go:107] duration metric: took 2m36.5137575s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 10:46:27.043541   10056 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-839400 cluster.
	I0429 10:46:27.051613   10056 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 10:46:27.054781   10056 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 10:46:27.060806   10056 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0429 10:46:27.063351   10056 addons.go:505] duration metric: took 3m14.3054533s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0429 10:46:27.063875   10056 start.go:245] waiting for cluster config update ...
	I0429 10:46:27.063875   10056 start.go:254] writing updated cluster config ...
	I0429 10:46:27.077892   10056 ssh_runner.go:195] Run: rm -f paused
	I0429 10:46:27.344909   10056 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 10:46:27.352211   10056 out.go:177] * Done! kubectl is now configured to use "addons-839400" cluster and "default" namespace by default
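For the gcp-auth opt-out described in the output above, `gcp-auth-skip-secret` is a pod label that must be present when the pod is created, because the addon's mutating webhook only acts on new pods. A minimal sketch against this run's cluster (the pod name, image, and sleep duration are invented for illustration; the "true" value follows the gcp-auth addon's documented convention):

    kubectl --context addons-839400 run skip-creds-demo --image=busybox -l gcp-auth-skip-secret=true --restart=Never -- sleep 300

Pods created without the label receive the credential mount automatically once the addon is enabled.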
	
	
	==> Docker <==
	Apr 29 10:47:02 addons-839400 dockerd[1333]: time="2024-04-29T10:47:02.750988119Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:02 addons-839400 dockerd[1327]: time="2024-04-29T10:47:02.867237994Z" level=info msg="ignoring event" container=295c530e9582157284a1b5048196ec87dd332886e502d88fcf5b3c02cf271ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:02 addons-839400 dockerd[1333]: time="2024-04-29T10:47:02.867711597Z" level=info msg="shim disconnected" id=295c530e9582157284a1b5048196ec87dd332886e502d88fcf5b3c02cf271ce7 namespace=moby
	Apr 29 10:47:02 addons-839400 dockerd[1333]: time="2024-04-29T10:47:02.867976398Z" level=warning msg="cleaning up after shim disconnected" id=295c530e9582157284a1b5048196ec87dd332886e502d88fcf5b3c02cf271ce7 namespace=moby
	Apr 29 10:47:02 addons-839400 dockerd[1333]: time="2024-04-29T10:47:02.868152299Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:03 addons-839400 dockerd[1333]: time="2024-04-29T10:47:03.151553087Z" level=info msg="shim disconnected" id=5989f5cc016eec71747b9de12aa51c72b458d306b2c42352a787bc9ab1ad70fc namespace=moby
	Apr 29 10:47:03 addons-839400 dockerd[1333]: time="2024-04-29T10:47:03.151766285Z" level=warning msg="cleaning up after shim disconnected" id=5989f5cc016eec71747b9de12aa51c72b458d306b2c42352a787bc9ab1ad70fc namespace=moby
	Apr 29 10:47:03 addons-839400 dockerd[1333]: time="2024-04-29T10:47:03.151805385Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:03 addons-839400 dockerd[1327]: time="2024-04-29T10:47:03.152230381Z" level=info msg="ignoring event" container=5989f5cc016eec71747b9de12aa51c72b458d306b2c42352a787bc9ab1ad70fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.254869679Z" level=info msg="shim disconnected" id=85669681a569c3419b141400f5a4c2a165d8dbea5c837cea7d3ffa20aa2175f2 namespace=moby
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.255073779Z" level=warning msg="cleaning up after shim disconnected" id=85669681a569c3419b141400f5a4c2a165d8dbea5c837cea7d3ffa20aa2175f2 namespace=moby
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.255104379Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:08 addons-839400 dockerd[1327]: time="2024-04-29T10:47:08.256209680Z" level=info msg="ignoring event" container=85669681a569c3419b141400f5a4c2a165d8dbea5c837cea7d3ffa20aa2175f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:08 addons-839400 dockerd[1327]: time="2024-04-29T10:47:08.516875778Z" level=info msg="ignoring event" container=59fec16ba04496c9e53e6fee127418d7fde2327bf2fe48a9be64544e47d67268 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.518082179Z" level=info msg="shim disconnected" id=59fec16ba04496c9e53e6fee127418d7fde2327bf2fe48a9be64544e47d67268 namespace=moby
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.518142179Z" level=warning msg="cleaning up after shim disconnected" id=59fec16ba04496c9e53e6fee127418d7fde2327bf2fe48a9be64544e47d67268 namespace=moby
	Apr 29 10:47:08 addons-839400 dockerd[1333]: time="2024-04-29T10:47:08.518154679Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1327]: time="2024-04-29T10:47:10.378186604Z" level=info msg="ignoring event" container=06e7af0f1f1521be663ca3a57a02fb2a0aef92b738cb50d40c4e9062e0dc207c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.378667099Z" level=info msg="shim disconnected" id=06e7af0f1f1521be663ca3a57a02fb2a0aef92b738cb50d40c4e9062e0dc207c namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.378722698Z" level=warning msg="cleaning up after shim disconnected" id=06e7af0f1f1521be663ca3a57a02fb2a0aef92b738cb50d40c4e9062e0dc207c namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.378733398Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1327]: time="2024-04-29T10:47:10.785443244Z" level=info msg="ignoring event" container=8628c24052d30b6ba2c05d29690f8e3ddba3ff573442405bfb1607d3bb6a3853 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.787649122Z" level=info msg="shim disconnected" id=8628c24052d30b6ba2c05d29690f8e3ddba3ff573442405bfb1607d3bb6a3853 namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.787889920Z" level=warning msg="cleaning up after shim disconnected" id=8628c24052d30b6ba2c05d29690f8e3ddba3ff573442405bfb1607d3bb6a3853 namespace=moby
	Apr 29 10:47:10 addons-839400 dockerd[1333]: time="2024-04-29T10:47:10.788029418Z" level=info msg="cleaning up dead shim" namespace=moby
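The paired lines above ("shim disconnected" from one dockerd pid, "ignoring event ... TaskDelete" from the other, both carrying the same container ID) are routine containerd shim teardown after a container exits, not errors by themselves. If a wider window than this short excerpt is needed, the same daemon log can be read off the node; a sketch, assuming dockerd runs under the `docker` systemd unit in the minikube guest:

    out/minikube-windows-amd64.exe -p addons-839400 ssh -- sudo journalctl -u docker --since "2024-04-29 10:46:00"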
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	7b6f256862296       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            24 seconds ago       Exited              gadget                                   3                   8f62fe3cb1c22       gadget-rvgjq
	49a8b6fc44346       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   60228307768bc       gcp-auth-5db96cd9b4-fs8kt
	d48ad6de67172       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   018a655165837       ingress-nginx-controller-768f948f8f-hl5r2
	d38abc73f394e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   64d884768b392       csi-hostpathplugin-jnpxf
	7b07e27412226       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   64d884768b392       csi-hostpathplugin-jnpxf
	9548c947eebee       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   64d884768b392       csi-hostpathplugin-jnpxf
	09d433327dabf       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   64d884768b392       csi-hostpathplugin-jnpxf
	82f879a5fcb2d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   64d884768b392       csi-hostpathplugin-jnpxf
	e6bb1ab9d0836       684c5ea3b61b2                                                                                                                                About a minute ago   Exited              patch                                    2                   77518f64eedd4       ingress-nginx-admission-patch-4vbct
	3ab82dab4415b       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   51f0a863b653e       csi-hostpath-resizer-0
	61d79bc89a74c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   64d884768b392       csi-hostpathplugin-jnpxf
	cbd86ceb1bbc5       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   a8294bdb13f4f       csi-hostpath-attacher-0
	093df4a0cb375       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              create                                   0                   e5c338098d53c       ingress-nginx-admission-create-b8dxs
	db765b822a599       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   db05f593a3571       local-path-provisioner-8d985888d-2g9jh
	a5cd5e0732c6d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   42f3b421f5386       snapshot-controller-745499f584-g8zvx
	44523cdeef664       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   d92f69c9aed79       snapshot-controller-745499f584-f9q5v
	b379805f9d378       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   8b74f2666f1a0       yakd-dashboard-5ddbf7d777-vzlb4
	0e5d77e54dc5f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   16065b77545f7       tiller-deploy-6677d64bcd-pgj25
	098135893dd3d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   6f6f11fc76f0e       kube-ingress-dns-minikube
	5233ff586b719       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   f1b7d055a35b8       nvidia-device-plugin-daemonset-fp9v2
	33e12d0919540       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   4eca720337ab2       storage-provisioner
	e9c68b34496e5       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   087eb8093847d       coredns-7db6d8ff4d-8cdqb
	aaee328be282b       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   8c7059ef94a42       coredns-7db6d8ff4d-bnv6h
	d028e594e2706       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   86205074a70d0       kube-proxy-j2xmk
	1395ae371650f       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   e503ce6cd4d3f       kube-controller-manager-addons-839400
	1ab6771e6c59f       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   9d35359ab31fb       etcd-addons-839400
	79fd60fee7a81       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   83d82c3c9c93f       kube-scheduler-addons-839400
	d3c75bce2bc18       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   4aa1fc3f45544       kube-apiserver-addons-839400
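
	The table above is minikube's point-in-time snapshot of every container on the node; the only non-Running entries are the two completed admission jobs (create/patch) and the gadget container, already on attempt 3. Assuming the crictl binary bundled in the minikube guest, the same view can be reproduced with:

	  $ minikube -p addons-839400 ssh -- "sudo crictl ps -a"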
	
	
	==> controller_ingress [d48ad6de6717] <==
	W0429 10:46:22.632210       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0429 10:46:22.632465       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0429 10:46:22.641340       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0429 10:46:22.876728       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0429 10:46:22.912571       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0429 10:46:22.924984       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0429 10:46:22.938192       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"373dfefe-05ca-4524-95b7-24f42863dc23", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0429 10:46:22.946887       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"3633dc94-7e1d-4d0b-9c9c-82fd43e537a8", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0429 10:46:22.947162       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"6a51659e-11cd-4aec-b6f7-37e04f21560b", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0429 10:46:24.127791       7 nginx.go:307] "Starting NGINX process"
	I0429 10:46:24.128118       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0429 10:46:24.128229       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0429 10:46:24.128697       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0429 10:46:24.165683       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0429 10:46:24.165925       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-hl5r2"
	I0429 10:46:24.173292       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-hl5r2" node="addons-839400"
	I0429 10:46:24.209915       7 controller.go:210] "Backend successfully reloaded"
	I0429 10:46:24.209975       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0429 10:46:24.210406       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-hl5r2", UID:"a12706f6-81a9-44fb-9562-8cba037324a3", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
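
	The controller startup is clean: API client created, validation webhook listening on :8443, leader lease acquired, and a single backend reload. The lease named in the log can be inspected directly:

	  $ kubectl --context addons-839400 -n ingress-nginx get lease ingress-nginx-leader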
	
	
	
	==> coredns [aaee328be282] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ad034cdec630ea896b94a48e8befd9caaf201b38d8a8007174c2232543e2c99f7633cb4df3d02156a6d84597982f74bb9dc874d19116cf29e0234336f9f204d8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34329 - 49192 "HINFO IN 4630362296960670505.4606615800701516942. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078771871s
	[INFO] 10.244.0.9:52396 - 38905 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000288696s
	[INFO] 10.244.0.9:52396 - 20987 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001191283s
	[INFO] 10.244.0.22:57563 - 16703 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002408s
	[INFO] 10.244.0.22:57939 - 52068 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000202s
	[INFO] 10.244.0.22:50828 - 18497 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001745399s
	[INFO] 10.244.0.22:58213 - 3988 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.0018373s
	[INFO] 10.244.0.25:56812 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000344395s
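
	The NXDOMAIN entries for names such as registry.kube-system.svc.cluster.local.svc.cluster.local are the expected side effect of pod DNS search-path expansion, not a resolution failure: with a kubelet-generated resolv.conf like the sketch below (the nameserver IP assumes the default kube-dns ClusterIP), any name with fewer than five dots is tried against each search suffix before being sent as-is.

	  search kube-system.svc.cluster.local svc.cluster.local cluster.local
	  nameserver 10.96.0.10
	  options ndots:5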
	
	
	==> coredns [e9c68b34496e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ad034cdec630ea896b94a48e8befd9caaf201b38d8a8007174c2232543e2c99f7633cb4df3d02156a6d84597982f74bb9dc874d19116cf29e0234336f9f204d8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51918 - 59657 "HINFO IN 7815041532068597995.664374386818078230. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.136309422s
	[INFO] 10.244.0.9:59165 - 14367 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000423894s
	[INFO] 10.244.0.9:59165 - 47131 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147797s
	[INFO] 10.244.0.9:46555 - 8309 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092899s
	[INFO] 10.244.0.9:46555 - 23370 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062299s
	[INFO] 10.244.0.9:34541 - 56882 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180898s
	[INFO] 10.244.0.9:34541 - 46128 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076899s
	[INFO] 10.244.0.9:57952 - 6584 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092699s
	[INFO] 10.244.0.9:57952 - 63422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085898s
	[INFO] 10.244.0.9:44945 - 53243 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000584092s
	[INFO] 10.244.0.9:44945 - 36606 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000481493s
	[INFO] 10.244.0.9:60375 - 50150 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000244097s
	[INFO] 10.244.0.9:60375 - 15589 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112098s
	[INFO] 10.244.0.9:43930 - 58869 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080199s
	[INFO] 10.244.0.9:43930 - 1019 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000167398s
	[INFO] 10.244.0.22:45731 - 1296 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0003685s
	[INFO] 10.244.0.22:44247 - 6635 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001402s
	[INFO] 10.244.0.22:39075 - 32224 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001183s
	[INFO] 10.244.0.22:46258 - 2023 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0002575s
	[INFO] 10.244.0.25:52479 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000287495s
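
	Both CoreDNS replicas end with NOERROR answers for the fully qualified registry service name, so in-cluster DNS was healthy when the test's wget probe ran. A one-off re-check in the same style the test uses:

	  $ kubectl --context addons-839400 run --rm dnsprobe -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local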
	
	
	==> describe nodes <==
	Name:               addons-839400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-839400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=addons-839400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T10_42_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-839400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-839400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 10:42:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-839400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 10:47:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 10:47:05 +0000   Mon, 29 Apr 2024 10:42:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 10:47:05 +0000   Mon, 29 Apr 2024 10:42:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 10:47:05 +0000   Mon, 29 Apr 2024 10:42:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 10:47:05 +0000   Mon, 29 Apr 2024 10:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.182.147
	  Hostname:    addons-839400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 22300bc512544ba1b3a7364e94935546
	  System UUID:                098b8cd4-d1bc-1943-8bef-5f798baa2ed3
	  Boot ID:                    1987a612-01b1-4922-9e7c-3d5415add6f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gadget                      gadget-rvgjq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  gcp-auth                    gcp-auth-5db96cd9b4-fs8kt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-hl5r2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m39s
	  kube-system                 coredns-7db6d8ff4d-8cdqb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m9s
	  kube-system                 coredns-7db6d8ff4d-bnv6h                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m9s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpathplugin-jnpxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-addons-839400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m25s
	  kube-system                 kube-apiserver-addons-839400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-839400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-j2xmk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-addons-839400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 nvidia-device-plugin-daemonset-fp9v2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 snapshot-controller-745499f584-f9q5v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 snapshot-controller-745499f584-g8zvx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 tiller-deploy-6677d64bcd-pgj25               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  local-path-storage          local-path-provisioner-8d985888d-2g9jh       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-vzlb4              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             458Mi (11%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m25s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s  kubelet          Node addons-839400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s  kubelet          Node addons-839400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s  kubelet          Node addons-839400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m20s  kubelet          Node addons-839400 status is now: NodeReady
	  Normal  RegisteredNode           4m12s  node-controller  Node addons-839400 event: Registered Node addons-839400 in Controller
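
	The node is Ready with no taints, but note the standing request pressure: 950m of the 2000m CPU capacity is reserved before any test workload is scheduled, which matters for the slow etcd and apiserver timings below. This block can be regenerated at any time with:

	  $ kubectl --context addons-839400 describe node addons-839400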
	
	
	==> dmesg <==
	[  +0.154094] kauditd_printk_skb: 62 callbacks suppressed
	[Apr29 10:43] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.584955] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.564518] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.013549] kauditd_printk_skb: 42 callbacks suppressed
	[  +8.935206] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.447058] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.080144] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.008982] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.088011] kauditd_printk_skb: 71 callbacks suppressed
	[Apr29 10:45] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.791296] kauditd_printk_skb: 24 callbacks suppressed
	[  +9.460551] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.050528] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.676539] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.521854] kauditd_printk_skb: 73 callbacks suppressed
	[Apr29 10:46] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.948072] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.085844] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.255264] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.007014] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.005875] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.047777] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.069172] kauditd_printk_skb: 21 callbacks suppressed
	[Apr29 10:47] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [1ab6771e6c59] <==
	{"level":"warn","ts":"2024-04-29T10:46:25.30364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.981437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T10:46:25.303884Z","caller":"traceutil/trace.go:171","msg":"trace[1836806254] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1240; }","duration":"254.253537ms","start":"2024-04-29T10:46:25.049621Z","end":"2024-04-29T10:46:25.303875Z","steps":["trace[1836806254] 'range keys from in-memory index tree'  (duration: 253.892737ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T10:46:51.047805Z","caller":"traceutil/trace.go:171","msg":"trace[1141592374] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"184.524452ms","start":"2024-04-29T10:46:50.863262Z","end":"2024-04-29T10:46:51.047786Z","steps":["trace[1141592374] 'read index received'  (duration: 184.360452ms)","trace[1141592374] 'applied index is now lower than readState.Index'  (duration: 163.5µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T10:46:51.048185Z","caller":"traceutil/trace.go:171","msg":"trace[2024191943] transaction","detail":"{read_only:false; response_revision:1387; number_of_response:1; }","duration":"573.172573ms","start":"2024-04-29T10:46:50.475Z","end":"2024-04-29T10:46:51.048172Z","steps":["trace[2024191943] 'process raft request'  (duration: 572.667872ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.048415Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:50.474987Z","time spent":"573.307973ms","remote":"127.0.0.1:49194","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1364 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T10:46:51.048629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.364653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3882"}
	{"level":"info","ts":"2024-04-29T10:46:51.048662Z","caller":"traceutil/trace.go:171","msg":"trace[748606386] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1387; }","duration":"185.423953ms","start":"2024-04-29T10:46:50.863229Z","end":"2024-04-29T10:46:51.048653Z","steps":["trace[748606386] 'agreement among raft nodes before linearized reading'  (duration: 185.320653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.048815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.728728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-04-29T10:46:51.048837Z","caller":"traceutil/trace.go:171","msg":"trace[841295731] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1387; }","duration":"155.774028ms","start":"2024-04-29T10:46:50.893057Z","end":"2024-04-29T10:46:51.048831Z","steps":["trace[841295731] 'agreement among raft nodes before linearized reading'  (duration: 155.713128ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.713437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.803642ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11779322233578276600 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1330 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:3833 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T10:46:51.713687Z","caller":"traceutil/trace.go:171","msg":"trace[1139255220] linearizableReadLoop","detail":"{readStateIndex:1458; appliedIndex:1456; }","duration":"507.210412ms","start":"2024-04-29T10:46:51.206464Z","end":"2024-04-29T10:46:51.713675Z","steps":["trace[1139255220] 'read index received'  (duration: 209.06217ms)","trace[1139255220] 'applied index is now lower than readState.Index'  (duration: 298.147742ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T10:46:51.713753Z","caller":"traceutil/trace.go:171","msg":"trace[428135807] transaction","detail":"{read_only:false; response_revision:1389; number_of_response:1; }","duration":"655.527132ms","start":"2024-04-29T10:46:51.058219Z","end":"2024-04-29T10:46:51.713746Z","steps":["trace[428135807] 'process raft request'  (duration: 655.375432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.713798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:51.058207Z","time spent":"655.563532ms","remote":"127.0.0.1:49294","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1358 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-04-29T10:46:51.713835Z","caller":"traceutil/trace.go:171","msg":"trace[1355354426] transaction","detail":"{read_only:false; response_revision:1388; number_of_response:1; }","duration":"661.057437ms","start":"2024-04-29T10:46:51.052764Z","end":"2024-04-29T10:46:51.713821Z","steps":["trace[1355354426] 'process raft request'  (duration: 362.753495ms)","trace[1355354426] 'compare'  (duration: 297.689642ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T10:46:51.713898Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:51.052748Z","time spent":"661.118237ms","remote":"127.0.0.1:49208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3879,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1330 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:3833 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >"}
	{"level":"info","ts":"2024-04-29T10:46:51.713922Z","caller":"traceutil/trace.go:171","msg":"trace[1246654945] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"446.724363ms","start":"2024-04-29T10:46:51.26719Z","end":"2024-04-29T10:46:51.713915Z","steps":["trace[1246654945] 'process raft request'  (duration: 446.455363ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.71396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:51.267168Z","time spent":"446.771863ms","remote":"127.0.0.1:49294","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-7etl4vaonbj6wpeioa4j74bpjq\" mod_revision:1334 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-7etl4vaonbj6wpeioa4j74bpjq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-7etl4vaonbj6wpeioa4j74bpjq\" > >"}
	{"level":"warn","ts":"2024-04-29T10:46:51.714098Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"507.613212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:10397"}
	{"level":"info","ts":"2024-04-29T10:46:51.714119Z","caller":"traceutil/trace.go:171","msg":"trace[1584341303] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1390; }","duration":"507.908712ms","start":"2024-04-29T10:46:51.206204Z","end":"2024-04-29T10:46:51.714113Z","steps":["trace[1584341303] 'agreement among raft nodes before linearized reading'  (duration: 507.801412ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.714154Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:51.206183Z","time spent":"507.964912ms","remote":"127.0.0.1:49208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":10420,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-29T10:46:51.714299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.436387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-h8b6k.17caba55daac3918\" ","response":"range_response_count:1 size:866"}
	{"level":"info","ts":"2024-04-29T10:46:51.714326Z","caller":"traceutil/trace.go:171","msg":"trace[686817283] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-h8b6k.17caba55daac3918; range_end:; response_count:1; response_revision:1390; }","duration":"353.488887ms","start":"2024-04-29T10:46:51.360828Z","end":"2024-04-29T10:46:51.714317Z","steps":["trace[686817283] 'agreement among raft nodes before linearized reading'  (duration: 353.414787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T10:46:51.714345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T10:46:51.360813Z","time spent":"353.526387ms","remote":"127.0.0.1:49096","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":889,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-h8b6k.17caba55daac3918\" "}
	{"level":"warn","ts":"2024-04-29T10:46:51.714761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.005595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:10397"}
	{"level":"info","ts":"2024-04-29T10:46:51.714786Z","caller":"traceutil/trace.go:171","msg":"trace[97132143] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1390; }","duration":"240.059695ms","start":"2024-04-29T10:46:51.47472Z","end":"2024-04-29T10:46:51.714779Z","steps":["trace[97132143] 'agreement among raft nodes before linearized reading'  (duration: 239.980595ms)"],"step_count":1}
	
	
	==> gcp-auth [49a8b6fc4434] <==
	2024/04/29 10:46:25 GCP Auth Webhook started!
	2024/04/29 10:46:28 Ready to marshal response ...
	2024/04/29 10:46:28 Ready to write response ...
	2024/04/29 10:46:28 Ready to marshal response ...
	2024/04/29 10:46:28 Ready to write response ...
	2024/04/29 10:46:38 Ready to marshal response ...
	2024/04/29 10:46:38 Ready to write response ...
	2024/04/29 10:46:42 Ready to marshal response ...
	2024/04/29 10:46:42 Ready to write response ...
	2024/04/29 10:46:52 Ready to marshal response ...
	2024/04/29 10:46:52 Ready to write response ...
	2024/04/29 10:47:24 Ready to marshal response ...
	2024/04/29 10:47:24 Ready to write response ...
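
	Each marshal/write pair above is one admission review served by the gcp-auth mutating webhook as the test created pods. Its registration can be confirmed with:

	  $ kubectl --context addons-839400 get mutatingwebhookconfigurations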
	
	
	==> kernel <==
	 10:47:24 up 6 min,  0 users,  load average: 3.58, 2.79, 1.27
	Linux addons-839400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d3c75bce2bc1] <==
	Trace[1428887173]: [814.003236ms] [814.003236ms] END
	I0429 10:46:22.209562       1 trace.go:236] Trace[391502970]: "Update" accept:application/json, */*,audit-id:16e6205e-fe29-4321-a459-88b7eb68ed1f,client:172.26.182.147,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 10:46:21.374) (total time: 834ms):
	Trace[391502970]: ["GuaranteedUpdate etcd3" audit-id:16e6205e-fe29-4321-a459-88b7eb68ed1f,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 834ms (10:46:21.375)
	Trace[391502970]:  ---"Txn call completed" 831ms (10:46:22.209)]
	Trace[391502970]: [834.655732ms] [834.655732ms] END
	I0429 10:46:22.210336       1 trace.go:236] Trace[770704210]: "List" accept:application/json, */*,audit-id:e557a585-0c5d-44bf-87ec-099ebef7d5a9,client:172.26.176.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Apr-2024 10:46:21.526) (total time: 683ms):
	Trace[770704210]: ["List(recursive=true) etcd3" audit-id:e557a585-0c5d-44bf-87ec-099ebef7d5a9,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 683ms (10:46:21.526)]
	Trace[770704210]: [683.559362ms] [683.559362ms] END
	I0429 10:46:51.049728       1 trace.go:236] Trace[1245636070]: "Update" accept:application/json, */*,audit-id:44c27426-2bc4-462f-b653-8a886a8ea676,client:172.26.182.147,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 10:46:50.473) (total time: 576ms):
	Trace[1245636070]: ["GuaranteedUpdate etcd3" audit-id:44c27426-2bc4-462f-b653-8a886a8ea676,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 576ms (10:46:50.473)
	Trace[1245636070]:  ---"Txn call completed" 574ms (10:46:51.049)]
	Trace[1245636070]: [576.405276ms] [576.405276ms] END
	I0429 10:46:51.716967       1 trace.go:236] Trace[583783080]: "Update" accept:application/json, */*,audit-id:acea60e1-69a1-4581-bfd4-7937b7d8de64,client:10.244.0.11,api-group:coordination.k8s.io,api-version:v1,name:snapshot-controller-leader,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 10:46:51.055) (total time: 661ms):
	Trace[583783080]: ["GuaranteedUpdate etcd3" audit-id:acea60e1-69a1-4581-bfd4-7937b7d8de64,key:/leases/kube-system/snapshot-controller-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 660ms (10:46:51.056)
	Trace[583783080]:  ---"Txn call completed" 659ms (10:46:51.716)]
	Trace[583783080]: [661.009637ms] [661.009637ms] END
	I0429 10:46:51.728187       1 trace.go:236] Trace[1012853744]: "List" accept:application/json, */*,audit-id:e69e83ed-65f8-4d7e-afb5-e2584e71c3b3,client:172.26.176.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Apr-2024 10:46:51.205) (total time: 522ms):
	Trace[1012853744]: ["List(recursive=true) etcd3" audit-id:e69e83ed-65f8-4d7e-afb5-e2584e71c3b3,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 522ms (10:46:51.205)]
	Trace[1012853744]: [522.928325ms] [522.928325ms] END
	I0429 10:46:51.763337       1 trace.go:236] Trace[2011562259]: "Delete" accept:application/json,audit-id:fbf5b5a8-3720-4cdd-bcf7-b754f89fead9,client:172.26.176.1,api-group:,api-version:v1,name:test-local-path,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/default/pods/test-local-path,user-agent:kubectl/v1.30.0 (windows/amd64) kubernetes/7c48c2b,verb:DELETE (29-Apr-2024 10:46:50.862) (total time: 901ms):
	Trace[2011562259]: ["GuaranteedUpdate etcd3" audit-id:fbf5b5a8-3720-4cdd-bcf7-b754f89fead9,key:/pods/default/test-local-path,type:*core.Pod,resource:pods 711ms (10:46:51.051)
	Trace[2011562259]:  ---"Txn call completed" 673ms (10:46:51.725)]
	Trace[2011562259]: ---"Object deleted from database" 36ms (10:46:51.762)
	Trace[2011562259]: [901.086433ms] [901.086433ms] END
	I0429 10:47:06.719200       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
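
	The apiserver traces mirror the slow etcd transactions above (several PUT/DELETE calls spending 500ms+ in "Txn call completed"), and the final line marks the first VolumeSnapshot admission, i.e. the start of the CSI snapshot/restore phase. The snapshot objects it admitted can be listed with:

	  $ kubectl --context addons-839400 get volumesnapshots.snapshot.storage.k8s.io -A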
	
	
	==> kube-controller-manager [1395ae371650] <==
	I0429 10:45:51.191000       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 10:45:51.207034       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 10:45:51.688070       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:45:51.832330       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:45:52.074763       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:45:52.091575       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:45:52.104548       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:45:52.112734       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:45:52.848645       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:45:52.879971       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:45:52.890452       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:45:52.930027       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 10:46:22.323005       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:46:22.342021       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 10:46:22.463456       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 10:46:22.485088       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 10:46:23.582307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="79.3µs"
	I0429 10:46:26.848697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="26.461593ms"
	I0429 10:46:26.849304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="41.8µs"
	I0429 10:46:32.431074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="77.662175ms"
	I0429 10:46:32.447822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="317.401µs"
	I0429 10:46:48.726852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.2µs"
	I0429 10:47:02.431133       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="8.9µs"
	I0429 10:47:10.172995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-8677549d7" duration="5.8µs"
	I0429 10:47:10.621404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="7.999µs"
	
	
	==> kube-proxy [d028e594e270] <==
	I0429 10:43:22.729290       1 server_linux.go:69] "Using iptables proxy"
	I0429 10:43:22.986139       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.182.147"]
	I0429 10:43:24.133035       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 10:43:24.133099       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 10:43:24.133239       1 server_linux.go:165] "Using iptables Proxier"
	I0429 10:43:24.184792       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 10:43:24.185780       1 server.go:872] "Version info" version="v1.30.0"
	I0429 10:43:24.200807       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 10:43:24.232064       1 config.go:192] "Starting service config controller"
	I0429 10:43:24.232297       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 10:43:24.232571       1 config.go:101] "Starting endpoint slice config controller"
	I0429 10:43:24.232759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 10:43:24.313064       1 config.go:319] "Starting node config controller"
	I0429 10:43:24.313111       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 10:43:24.377243       1 shared_informer.go:320] Caches are synced for service config
	I0429 10:43:24.415524       1 shared_informer.go:320] Caches are synced for node config
	I0429 10:43:24.464541       1 shared_informer.go:320] Caches are synced for endpoint slice config
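
	kube-proxy came up in single-stack IPv4 iptables mode and synced all three config caches. The service rules it programs can be inspected from inside the node (a sketch; KUBE-SERVICES is the standard entry chain for iptables mode):

	  $ minikube -p addons-839400 ssh -- "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"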
	
	
	==> kube-scheduler [79fd60fee7a8] <==
	W0429 10:42:56.887972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 10:42:56.888461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 10:42:56.889250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 10:42:56.889996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 10:42:56.941851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 10:42:56.941962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 10:42:56.981629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 10:42:56.981763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 10:42:57.004878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 10:42:57.005378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 10:42:57.047332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 10:42:57.047654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 10:42:57.067927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 10:42:57.067969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 10:42:57.105877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 10:42:57.106082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 10:42:57.110676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 10:42:57.111005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 10:42:57.154595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 10:42:57.154954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 10:42:57.254110       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 10:42:57.254653       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 10:42:57.260044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 10:42:57.260306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0429 10:42:59.964990       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
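
	The burst of "forbidden" list/watch failures is the usual startup race: the scheduler's informers begin before its RBAC bindings are visible, and the closing "Caches are synced" line shows it recovered on its own. The permission can be verified after the fact with:

	  $ kubectl --context addons-839400 auth can-i list nodes --as=system:kube-scheduler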
	
	
	==> kubelet <==
	Apr 29 10:47:10 addons-839400 kubelet[2133]: I0429 10:47:10.909689    2133 scope.go:117] "RemoveContainer" containerID="06e7af0f1f1521be663ca3a57a02fb2a0aef92b738cb50d40c4e9062e0dc207c"
	Apr 29 10:47:10 addons-839400 kubelet[2133]: I0429 10:47:10.948292    2133 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjcrc\" (UniqueName: \"kubernetes.io/projected/be26d83f-8541-45f4-b635-6f793ac7f331-kube-api-access-wjcrc\") pod \"be26d83f-8541-45f4-b635-6f793ac7f331\" (UID: \"be26d83f-8541-45f4-b635-6f793ac7f331\") "
	Apr 29 10:47:10 addons-839400 kubelet[2133]: I0429 10:47:10.959983    2133 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be26d83f-8541-45f4-b635-6f793ac7f331-kube-api-access-wjcrc" (OuterVolumeSpecName: "kube-api-access-wjcrc") pod "be26d83f-8541-45f4-b635-6f793ac7f331" (UID: "be26d83f-8541-45f4-b635-6f793ac7f331"). InnerVolumeSpecName "kube-api-access-wjcrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 10:47:11 addons-839400 kubelet[2133]: I0429 10:47:11.049269    2133 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wjcrc\" (UniqueName: \"kubernetes.io/projected/be26d83f-8541-45f4-b635-6f793ac7f331-kube-api-access-wjcrc\") on node \"addons-839400\" DevicePath \"\""
	Apr 29 10:47:11 addons-839400 kubelet[2133]: I0429 10:47:11.334062    2133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad1be51-9f81-4aad-b5df-3e2a7bfe1426" path="/var/lib/kubelet/pods/5ad1be51-9f81-4aad-b5df-3e2a7bfe1426/volumes"
	Apr 29 10:47:13 addons-839400 kubelet[2133]: I0429 10:47:13.345190    2133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be26d83f-8541-45f4-b635-6f793ac7f331" path="/var/lib/kubelet/pods/be26d83f-8541-45f4-b635-6f793ac7f331/volumes"
	Apr 29 10:47:19 addons-839400 kubelet[2133]: I0429 10:47:19.311240    2133 scope.go:117] "RemoveContainer" containerID="7b6f2568622963d74e5fa870c6cd1863f6e8615b36c5a7ff4a381b299d1cccfe"
	Apr 29 10:47:19 addons-839400 kubelet[2133]: E0429 10:47:19.312620    2133 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-rvgjq_gadget(895a74bb-fc17-4db4-aabe-9953a75526b3)\"" pod="gadget/gadget-rvgjq" podUID="895a74bb-fc17-4db4-aabe-9953a75526b3"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.105887    2133 topology_manager.go:215] "Topology Admit Handler" podUID="dab539aa-d8e2-47c1-9e64-f42baab1fa1e" podNamespace="default" podName="task-pv-pod-restore"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106011    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd91972e-7309-42de-972b-4e836b093c94" containerName="registry"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106027    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ad1be51-9f81-4aad-b5df-3e2a7bfe1426" containerName="task-pv-container"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106036    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be26d83f-8541-45f4-b635-6f793ac7f331" containerName="cloud-spanner-emulator"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106045    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="556ba331-0f4c-4c0b-a8cb-9ceaf9b76463" containerName="registry-proxy"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106055    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="268e1989-9866-4928-adab-9f2ff85ea084" containerName="metrics-server"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: E0429 10:47:24.106064    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64f235ba-5fe5-4707-9494-4d52083acf2c" containerName="helper-pod"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.106113    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f235ba-5fe5-4707-9494-4d52083acf2c" containerName="helper-pod"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.106123    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad1be51-9f81-4aad-b5df-3e2a7bfe1426" containerName="task-pv-container"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.106133    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="be26d83f-8541-45f4-b635-6f793ac7f331" containerName="cloud-spanner-emulator"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.109523    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd91972e-7309-42de-972b-4e836b093c94" containerName="registry"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.109536    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e1989-9866-4928-adab-9f2ff85ea084" containerName="metrics-server"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.109549    2133 memory_manager.go:354] "RemoveStaleState removing state" podUID="556ba331-0f4c-4c0b-a8cb-9ceaf9b76463" containerName="registry-proxy"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.281515    2133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0cbe11a2-3502-4d69-a0df-cfc93ad4594f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dc3bbe2a-0615-11ef-afdb-16d5102f3895\") pod \"task-pv-pod-restore\" (UID: \"dab539aa-d8e2-47c1-9e64-f42baab1fa1e\") " pod="default/task-pv-pod-restore"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.281694    2133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szqwp\" (UniqueName: \"kubernetes.io/projected/dab539aa-d8e2-47c1-9e64-f42baab1fa1e-kube-api-access-szqwp\") pod \"task-pv-pod-restore\" (UID: \"dab539aa-d8e2-47c1-9e64-f42baab1fa1e\") " pod="default/task-pv-pod-restore"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.281815    2133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dab539aa-d8e2-47c1-9e64-f42baab1fa1e-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"dab539aa-d8e2-47c1-9e64-f42baab1fa1e\") " pod="default/task-pv-pod-restore"
	Apr 29 10:47:24 addons-839400 kubelet[2133]: I0429 10:47:24.402893    2133 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-0cbe11a2-3502-4d69-a0df-cfc93ad4594f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dc3bbe2a-0615-11ef-afdb-16d5102f3895\") pod \"task-pv-pod-restore\" (UID: \"dab539aa-d8e2-47c1-9e64-f42baab1fa1e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/b814927838a05078c68630907fbfb473050526756b0fa765b14887e9f3ec1ad6/globalmount\"" pod="default/task-pv-pod-restore"
	
	
	==> storage-provisioner [33e12d091954] <==
	I0429 10:43:43.911531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 10:43:44.081759       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 10:43:44.128487       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 10:43:44.297085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 10:43:44.321755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-839400_6b907c24-c1db-4cd0-839a-d3210178650e!
	I0429 10:43:44.336433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"662d1a08-69cf-475c-86a8-073867a595ee", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-839400_6b907c24-c1db-4cd0-839a-d3210178650e became leader
	I0429 10:43:44.528694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-839400_6b907c24-c1db-4cd0-839a-d3210178650e!
	

-- /stdout --
** stderr ** 
	W0429 10:47:16.072504    6924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-839400 -n addons-839400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-839400 -n addons-839400: (12.6661356s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-839400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-b8dxs ingress-nginx-admission-patch-4vbct
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-839400 describe pod ingress-nginx-admission-create-b8dxs ingress-nginx-admission-patch-4vbct
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-839400 describe pod ingress-nginx-admission-create-b8dxs ingress-nginx-admission-patch-4vbct: exit status 1 (171.249ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8dxs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4vbct" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-839400 describe pod ingress-nginx-admission-create-b8dxs ingress-nginx-admission-patch-4vbct: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.10s)
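The stderr captured above contains a single line: minikube's warning that it cannot resolve the Docker CLI context "default". Docker keeps per-context metadata under ~/.docker/contexts/meta/<sha256(context name)>/meta.json, and no such context exists on this CI host, so the lookup fails on every minikube invocation. A minimal Go sketch (illustrative, not part of the test suite) of where the long hex directory name in the warning comes from:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Docker keys each context's metadata directory by the SHA-256 of the
	// context name, so the missing "default" context surfaces as the
	// 37a8eec1... meta.json path quoted in the warnings above.
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
}

The warning itself is W-level and harmless to the cluster; it fails the test only because stderr is expected to stay clean.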

TestErrorSpam/setup (195.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-205500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 --driver=hyperv
E0429 10:51:27.421152    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.430709    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.451230    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.481997    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.529590    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.625111    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:27.799925    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:28.134584    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:28.786386    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:30.075271    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:32.642383    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:37.778228    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:51:48.027606    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:52:08.508853    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:52:49.483588    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 10:54:11.406912    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-205500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 --driver=hyperv: (3m15.1213321s)
error_spam_test.go:96: unexpected stderr: "W0429 10:51:24.931610   12904 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-205500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18756
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-205500" primary control-plane node in "nospam-205500" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-205500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0429 10:51:24.931610   12904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (195.12s)
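TestErrorSpam/setup fails for the same reason: the start itself succeeds (see the minikube stdout above), but error_spam_test.go:96 flags the Docker CLI context warning as unexpected stderr. The E-level cert_rotation lines are a separate nuisance, a certificate-reload loop still pointing at the addons-839400 profile's client.crt, which no longer exists on disk; only the context warning is reported as unexpected. A rough Go sketch of an allowlist-style stderr check of the kind this test performs (hypothetical names and allowlist; not the actual minikube test code):

package main

import (
	"fmt"
	"strings"
)

// unexpectedLines returns every non-empty stderr line that matches none of
// the allowed substrings. Illustrative only; the real test's allowlist and
// structure may differ.
func unexpectedLines(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		ok := false
		for _, allow := range allowed {
			if strings.Contains(line, allow) {
				ok = true
				break
			}
		}
		if !ok {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := `W0429 10:51:24.931610   12904 main.go:291] Unable to resolve the current Docker CLI context "default": ...`
	// With an allowlist that does not cover the context warning, the line
	// is reported as spam, mirroring the failure above.
	fmt.Println(unexpectedLines(stderr, []string{"example allowed warning"}))
}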

TestFunctional/serial/SoftStart (282.63s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-197400 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-197400 --alsologtostderr -v=8: exit status 90 (2m29.4695075s)

-- stdout --
	* [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	* Updating the running hyperv "functional-197400" VM ...
	
	

-- /stdout --
** stderr ** 
	W0429 11:01:30.366203   13764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 11:01:30.445059   13764 out.go:291] Setting OutFile to fd 884 ...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.445789   13764 out.go:304] Setting ErrFile to fd 280...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.469783   13764 out.go:298] Setting JSON to false
	I0429 11:01:30.474075   13764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":29963,"bootTime":1714358527,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:01:30.474075   13764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:01:30.478082   13764 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:01:30.484053   13764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:01:30.482999   13764 notify.go:220] Checking for updates...
	I0429 11:01:30.487059   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:01:30.489426   13764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:01:30.492314   13764 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:01:30.494672   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:01:30.497561   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:01:30.498504   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:01:35.797581   13764 out.go:177] * Using the hyperv driver based on existing profile
	I0429 11:01:35.800821   13764 start.go:297] selected driver: hyperv
	I0429 11:01:35.800821   13764 start.go:901] validating driver "hyperv" against &{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.800821   13764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:01:35.854447   13764 cni.go:84] Creating CNI manager for ""
	I0429 11:01:35.854447   13764 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 11:01:35.855168   13764 start.go:340] cluster config:
	{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.855712   13764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:01:35.860024   13764 out.go:177] * Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	I0429 11:01:35.862486   13764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:01:35.862966   13764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:01:35.862966   13764 cache.go:56] Caching tarball of preloaded images
	I0429 11:01:35.863088   13764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:01:35.863509   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:01:35.863697   13764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\config.json ...
	I0429 11:01:35.865973   13764 start.go:360] acquireMachinesLock for functional-197400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:01:35.865973   13764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-197400"
	I0429 11:01:35.865973   13764 start.go:96] Skipping create...Using existing machine configuration
	I0429 11:01:35.866728   13764 fix.go:54] fixHost starting: 
	I0429 11:01:35.866814   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:38.565164   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:38.566072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:38.566072   13764 fix.go:112] recreateIfNeeded on functional-197400: state=Running err=<nil>
	W0429 11:01:38.566163   13764 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 11:01:38.570099   13764 out.go:177] * Updating the running hyperv "functional-197400" VM ...
	I0429 11:01:38.572589   13764 machine.go:94] provisionDockerMachine start ...
	I0429 11:01:38.572790   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:40.728211   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:43.337044   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:43.338056   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:43.344719   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:43.344884   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:43.344884   13764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:01:43.492864   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:43.493032   13764 buildroot.go:166] provisioning hostname "functional-197400"
	I0429 11:01:43.493146   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:45.595027   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:48.153963   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:48.154713   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:48.154713   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-197400 && echo "functional-197400" | sudo tee /etc/hostname
	I0429 11:01:48.322635   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:48.322635   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:50.426116   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:53.002862   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:53.003355   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:53.003457   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-197400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-197400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-197400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:01:53.146326   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:01:53.146326   13764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:01:53.146326   13764 buildroot.go:174] setting up certificates
	I0429 11:01:53.146326   13764 provision.go:84] configureAuth start
	I0429 11:01:53.146326   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:57.763195   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:57.763363   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:57.763439   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:59.853320   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:02.368674   13764 provision.go:143] copyHostCerts
	I0429 11:02:02.369074   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:02:02.369383   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:02:02.369383   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:02:02.369931   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:02:02.370685   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:02:02.370685   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:02:02.370685   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:02:02.371650   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:02:02.372440   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:02:02.372519   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:02:02.372519   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:02:02.373046   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:02:02.374016   13764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-197400 san=[127.0.0.1 172.26.179.82 functional-197400 localhost minikube]
	I0429 11:02:02.495876   13764 provision.go:177] copyRemoteCerts
	I0429 11:02:02.510020   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:02:02.510020   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:04.618809   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:07.168803   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:07.282611   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7725535s)
	I0429 11:02:07.282611   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:02:07.282611   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 11:02:07.334346   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:02:07.334955   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:02:07.390221   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:02:07.391689   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:02:07.447983   13764 provision.go:87] duration metric: took 14.3015428s to configureAuth
	I0429 11:02:07.448063   13764 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:02:07.448063   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:02:07.448747   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:09.550299   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:12.123983   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:12.124562   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:12.124562   13764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:02:12.266791   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:02:12.267014   13764 buildroot.go:70] root file system type: tmpfs
	I0429 11:02:12.267189   13764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:02:12.267262   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:14.408118   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:16.960938   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:16.961202   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:16.967669   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:16.968259   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:16.968427   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:02:17.143647   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:02:17.143855   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:21.755006   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:21.755589   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:21.755589   13764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:02:21.897946   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:02:21.897946   13764 machine.go:97] duration metric: took 43.3250104s to provisionDockerMachine
	I0429 11:02:21.897946   13764 start.go:293] postStartSetup for "functional-197400" (driver="hyperv")
	I0429 11:02:21.897946   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:02:21.911428   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:02:21.911428   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:26.502118   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:26.619226   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7077235s)
	I0429 11:02:26.634064   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:02:26.641916   13764 command_runner.go:130] > NAME=Buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 11:02:26.641916   13764 command_runner.go:130] > ID=buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 11:02:26.641916   13764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 11:02:26.641916   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:02:26.641916   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:02:26.642478   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:02:26.643334   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:02:26.643334   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:02:26.644676   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> hosts in /etc/test/nested/copy/8496
	I0429 11:02:26.644676   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> /etc/test/nested/copy/8496/hosts
	I0429 11:02:26.657704   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8496
	I0429 11:02:26.682055   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:02:26.741547   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts --> /etc/test/nested/copy/8496/hosts (40 bytes)
	I0429 11:02:26.792541   13764 start.go:296] duration metric: took 4.8945563s for postStartSetup
	I0429 11:02:26.792541   13764 fix.go:56] duration metric: took 50.9254062s for fixHost
	I0429 11:02:26.792541   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:31.385529   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:31.385992   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:31.385992   13764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:02:31.514751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714388551.520714576
	
	I0429 11:02:31.514751   13764 fix.go:216] guest clock: 1714388551.520714576
	I0429 11:02:31.514751   13764 fix.go:229] Guest: 2024-04-29 11:02:31.520714576 +0000 UTC Remote: 2024-04-29 11:02:26.7925417 +0000 UTC m=+56.526311901 (delta=4.728172876s)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:33.581114   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:36.130279   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:36.131025   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:36.131025   13764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714388551
	I0429 11:02:36.291751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:02:31 UTC 2024
	
	I0429 11:02:36.291751   13764 fix.go:236] clock set: Mon Apr 29 11:02:31 UTC 2024
	 (err=<nil>)
	I0429 11:02:36.291751   13764 start.go:83] releasing machines lock for "functional-197400", held for 1m0.4252951s
	I0429 11:02:36.291751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:38.419682   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:41.001337   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:02:41.001536   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:41.013399   13764 ssh_runner.go:195] Run: cat /version.json
	I0429 11:02:41.013399   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.159330   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:45.835688   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.836385   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.836904   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.862776   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.935735   13764 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 11:02:45.936039   13764 ssh_runner.go:235] Completed: cat /version.json: (4.9226007s)
	I0429 11:02:45.950826   13764 ssh_runner.go:195] Run: systemctl --version
	I0429 11:02:46.011745   13764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 11:02:46.011850   13764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0103766s)
	I0429 11:02:46.011850   13764 command_runner.go:130] > systemd 252 (252)
	I0429 11:02:46.011999   13764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 11:02:46.026211   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:02:46.035440   13764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 11:02:46.035904   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:02:46.048490   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:02:46.067930   13764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 11:02:46.067930   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.068188   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:46.104796   13764 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 11:02:46.118218   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:02:46.152176   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:02:46.174564   13764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:02:46.187378   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:02:46.221768   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.255412   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:02:46.290318   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.325497   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:02:46.367045   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:02:46.403208   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:02:46.442281   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
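	
	For reference, the sed edits above converge on a CRI fragment of roughly this shape. This is reconstructed from the commands themselves rather than read back from the VM, so the surrounding table layout of /etc/containerd/config.toml is assumed:
	
	    # sketch of /etc/containerd/config.toml after the edits above (layout assumed)
	    [plugins."io.containerd.grpc.v1.cri"]
	      # inserted under the cri plugin table by the last sed command
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.9"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"   # v1/linux runtimes rewritten to runc.v2
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false                  # matches the "cgroupfs" driver chosen above
	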
	I0429 11:02:46.478926   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:02:46.499867   13764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 11:02:46.513297   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:02:46.549431   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:46.855826   13764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:02:46.905389   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.922503   13764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > [Unit]
	I0429 11:02:46.951373   13764 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 11:02:46.951373   13764 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 11:02:46.951373   13764 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 11:02:46.951470   13764 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitBurst=3
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 11:02:46.951470   13764 command_runner.go:130] > [Service]
	I0429 11:02:46.951507   13764 command_runner.go:130] > Type=notify
	I0429 11:02:46.951507   13764 command_runner.go:130] > Restart=on-failure
	I0429 11:02:46.951507   13764 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 11:02:46.951552   13764 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 11:02:46.951552   13764 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 11:02:46.951643   13764 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 11:02:46.951643   13764 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 11:02:46.951643   13764 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 11:02:46.951687   13764 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 11:02:46.951727   13764 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 11:02:46.951727   13764 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 11:02:46.951727   13764 command_runner.go:130] > ExecStart=
	I0429 11:02:46.951791   13764 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 11:02:46.951838   13764 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 11:02:46.951838   13764 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 11:02:46.951838   13764 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 11:02:46.951838   13764 command_runner.go:130] > LimitNOFILE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitNPROC=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitCORE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 11:02:46.951939   13764 command_runner.go:130] > TasksMax=infinity
	I0429 11:02:46.951939   13764 command_runner.go:130] > TimeoutStartSec=0
	I0429 11:02:46.951939   13764 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 11:02:46.951939   13764 command_runner.go:130] > Delegate=yes
	I0429 11:02:46.951939   13764 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 11:02:46.952000   13764 command_runner.go:130] > KillMode=process
	I0429 11:02:46.952000   13764 command_runner.go:130] > [Install]
	I0429 11:02:46.952000   13764 command_runner.go:130] > WantedBy=multi-user.target
	I0429 11:02:46.966498   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.010945   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:02:47.071693   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.111019   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:02:47.138047   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:47.173728   13764 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 11:02:47.188143   13764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:02:47.196459   13764 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 11:02:47.211733   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:02:47.232274   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
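	
	The 189-byte 10-cni.conf drop-in is copied from memory, so its payload never appears in this log. Structurally it must be a systemd override that clears the inherited ExecStart and points cri-dockerd at the CNI network plugin; the flags shown here are illustrative, not captured:
	
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni
	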
	I0429 11:02:47.282245   13764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:02:47.579073   13764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:02:47.847228   13764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:02:47.847310   13764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
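	
	Likewise, the 130-byte /etc/docker/daemon.json is not echoed into the log. Given the "cgroupfs" driver selected just above and the storage-driver=overlay2 that dockerd reports in the journal below, it presumably looks like the following sketch; the exact field set is an assumption:
	
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" },
	      "storage-driver": "overlay2"
	    }
	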
	I0429 11:02:47.911078   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:48.205114   13764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:03:59.569091   13764 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 11:03:59.569139   13764 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 11:03:59.569659   13764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3639778s)
	I0429 11:03:59.583436   13764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617057   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.617127   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617167   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.617214   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.617232   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.617382   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.617440   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.617474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617565   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618212   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618255   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0429 11:03:59.618824   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.618889   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.618988   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619010   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619060   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619155   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619952   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619975   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621634   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623248   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625385   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 11:03:59.655510   13764 out.go:177] 
	W0429 11:03:59.658137   13764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 11:03:59.659798   13764 out.go:239] * 
	W0429 11:03:59.661047   13764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:03:59.665567   13764 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-197400 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m30.0096966s for "functional-197400" cluster.
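The proximate failure is visible in the quoted log above: after docker.service was restarted at 11:02:59, the new dockerd (pid 4230) waited for containerd's socket and gave up sixty seconds later with failed to dial "/run/containerd/containerd.sock": context deadline exceeded, so systemd marked the unit failed and the soft start exited with status 90. As a hypothetical Go illustration only (dockerd actually dials containerd over gRPC; the socket path and 60s window here are taken from the log), the retry-until-deadline pattern that produces this exact error shape looks like:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialWithDeadline retries a unix-socket dial until ctx expires, mirroring
// the "failed to dial ...: context deadline exceeded" failure mode above
// when the daemon on the other end never starts listening.
func dialWithDeadline(ctx context.Context, path string) (net.Conn, error) {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", path)
		if err == nil {
			return conn, nil
		}
		select {
		case <-ctx.Done():
			// Socket never became ready before the deadline.
			return nil, fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
		case <-time.After(time.Second):
			// containerd not up yet; retry.
		}
	}
}

func main() {
	// The log shows a 60s window (start at 11:02:59, failure at 11:03:59).
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if _, err := dialWithDeadline(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println(err) // failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	}
}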
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400: exit status 2 (11.7105991s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:04:00.390756    2044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
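The recurring stderr warning about the Docker CLI context is, as the harness itself notes, benign: the CLI resolves the current context's metadata from %USERPROFILE%\.docker\contexts\meta\<digest>\meta.json, where the directory name appears to be the SHA-256 of the context name, and no context named "default" was ever written on this Jenkins agent. A minimal Go sketch, assuming only that digest scheme (the value in the warning matches sha256("default")):

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	name := "default"
	sum := sha256.Sum256([]byte(name))
	// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
	// the directory segment seen in the warning's meta.json path.
	fmt.Printf("%x\n", sum)
	fmt.Println(filepath.Join(".docker", "contexts", "meta", fmt.Sprintf("%x", sum), "meta.json"))
}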
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs -n 25: (1m48.4689597s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-839400 ip                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:48 UTC |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:48 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:49 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:49 UTC | 29 Apr 24 10:49 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-839400                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:49 UTC | 29 Apr 24 10:50 UTC |
	| addons  | enable dashboard -p                                                   | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-839400                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:51 UTC |
	| start   | -p nospam-205500 -n=1 --memory=2250 --wait=false                      | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:51 UTC | 29 Apr 24 10:54 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-205500                                                      | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	| start   | -p functional-197400                                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 11:01 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-197400                                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:01 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:01:30.445059   13764 out.go:291] Setting OutFile to fd 884 ...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.445789   13764 out.go:304] Setting ErrFile to fd 280...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.469783   13764 out.go:298] Setting JSON to false
	I0429 11:01:30.474075   13764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":29963,"bootTime":1714358527,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:01:30.474075   13764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:01:30.478082   13764 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:01:30.484053   13764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:01:30.482999   13764 notify.go:220] Checking for updates...
	I0429 11:01:30.487059   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:01:30.489426   13764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:01:30.492314   13764 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:01:30.494672   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:01:30.497561   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:01:30.498504   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:01:35.797581   13764 out.go:177] * Using the hyperv driver based on existing profile
	I0429 11:01:35.800821   13764 start.go:297] selected driver: hyperv
	I0429 11:01:35.800821   13764 start.go:901] validating driver "hyperv" against &{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.800821   13764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:01:35.854447   13764 cni.go:84] Creating CNI manager for ""
	I0429 11:01:35.854447   13764 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 11:01:35.855168   13764 start.go:340] cluster config:
	{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.855712   13764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:01:35.860024   13764 out.go:177] * Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	I0429 11:01:35.862486   13764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:01:35.862966   13764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:01:35.862966   13764 cache.go:56] Caching tarball of preloaded images
	I0429 11:01:35.863088   13764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:01:35.863509   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:01:35.863697   13764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\config.json ...
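profile.go:143 persists the cluster config dumped above as JSON under the profile directory. A minimal sketch of that save step, assuming a much-reduced struct (the real ClusterConfig carries every field shown in the dump):

    package main

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // ClusterConfig holds a tiny subset of the fields seen in the log above.
    type ClusterConfig struct {
        Name              string
        Driver            string
        KubernetesVersion string
    }

    // writeConfig mirrors the "Saving config to ...\profiles\functional-197400\config.json"
    // step: marshal the profile and write it under the profile directory.
    func writeConfig(dir string, cfg ClusterConfig) error {
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
    }

    func main() {
        _ = writeConfig("profiles/functional-197400", ClusterConfig{
            Name: "functional-197400", Driver: "hyperv", KubernetesVersion: "v1.30.0",
        })
    }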
	I0429 11:01:35.865973   13764 start.go:360] acquireMachinesLock for functional-197400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:01:35.865973   13764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-197400"
	I0429 11:01:35.865973   13764 start.go:96] Skipping create...Using existing machine configuration
	I0429 11:01:35.866728   13764 fix.go:54] fixHost starting: 
	I0429 11:01:35.866814   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:38.565164   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:38.566072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:38.566072   13764 fix.go:112] recreateIfNeeded on functional-197400: state=Running err=<nil>
	W0429 11:01:38.566163   13764 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 11:01:38.570099   13764 out.go:177] * Updating the running hyperv "functional-197400" VM ...
	I0429 11:01:38.572589   13764 machine.go:94] provisionDockerMachine start ...
	I0429 11:01:38.572790   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:40.728211   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:43.337044   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:43.338056   13764 main.go:141] libmachine: [stderr =====>] : 
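Every Hyper-V interaction in this log is a powershell.exe child process: the driver logs the command as [executing ==>] and the captured streams as [stdout =====>] / [stderr =====>]. A sketch of that invocation pattern in Go; psQuery is a hypothetical helper, but the PowerShell flags and cmdlet are taken from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // psQuery runs a Hyper-V cmdlet through PowerShell the way the driver
    // does above: -NoProfile -NonInteractive, capturing stdout and stderr
    // separately so they can be logged as [stdout =====>] / [stderr =====>].
    func psQuery(command string) (stdout, stderr string, err error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command)
        var out, errb bytes.Buffer
        cmd.Stdout, cmd.Stderr = &out, &errb
        err = cmd.Run()
        return strings.TrimSpace(out.String()), strings.TrimSpace(errb.String()), err
    }

    func main() {
        ip, _, err := psQuery(`(( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]`)
        if err == nil {
            fmt.Println("VM IP:", ip) // e.g. 172.26.179.82
        }
    }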
	I0429 11:01:43.344719   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:43.344884   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:43.344884   13764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:01:43.492864   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
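"Using SSH client type: native" means libmachine dials the VM with Go's SSH stack rather than an external ssh binary. A condensed sketch of running one such remote command with golang.org/x/crypto/ssh, assuming the profile's id_rsa key shown later in the log; runRemote is illustrative and deliberately skips host-key pinning, which a real client should not:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes a single command over SSH with key auth, the way
    // the "About to run SSH command: hostname" step above does.
    func runRemote(addr, user, keyPath, command string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // The CI VM's host key is not pinned here; production code must verify it.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command)
        return string(out), err
    }

    func main() {
        out, _ := runRemote("172.26.179.82:22", "docker",
            `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa`,
            "hostname")
        fmt.Print(out) // functional-197400
    }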
	
	I0429 11:01:43.493032   13764 buildroot.go:166] provisioning hostname "functional-197400"
	I0429 11:01:43.493146   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:45.595027   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:48.153963   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:48.154713   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:48.154713   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-197400 && echo "functional-197400" | sudo tee /etc/hostname
	I0429 11:01:48.322635   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:48.322635   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:50.426116   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:53.002862   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:53.003355   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:53.003457   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-197400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-197400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-197400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:01:53.146326   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:01:53.146326   13764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:01:53.146326   13764 buildroot.go:174] setting up certificates
	I0429 11:01:53.146326   13764 provision.go:84] configureAuth start
	I0429 11:01:53.146326   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:57.763195   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:57.763363   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:57.763439   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:59.853320   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:02.368674   13764 provision.go:143] copyHostCerts
	I0429 11:02:02.369074   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:02:02.369383   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:02:02.369383   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:02:02.369931   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:02:02.370685   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:02:02.370685   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:02:02.370685   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:02:02.371650   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:02:02.372440   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:02:02.372519   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:02:02.372519   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:02:02.373046   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:02:02.374016   13764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-197400 san=[127.0.0.1 172.26.179.82 functional-197400 localhost minikube]
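provision.go:117 issues a server certificate signed by the profile's CA, with the SAN list logged above (loopback, the VM IP, and the hostnames). A condensed, library-style crypto/x509 sketch of that issuance; key size, serial handling, and output paths are simplifications, and only the certificate (not its private key) is written here:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server cert with the given CA, carrying the
    // same kind of SAN list seen in the log (IPs plus DNS names).
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-197400"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.179.82")},
            DNSNames:     []string{"functional-197400", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    }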
	I0429 11:02:02.495876   13764 provision.go:177] copyRemoteCerts
	I0429 11:02:02.510020   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:02:02.510020   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:04.618809   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:07.168803   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:07.282611   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7725535s)
	I0429 11:02:07.282611   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:02:07.282611   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 11:02:07.334346   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:02:07.334955   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:02:07.390221   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:02:07.391689   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:02:07.447983   13764 provision.go:87] duration metric: took 14.3015428s to configureAuth
	I0429 11:02:07.448063   13764 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:02:07.448063   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:02:07.448747   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:09.550299   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:12.123983   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:12.124562   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:12.124562   13764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:02:12.266791   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:02:12.267014   13764 buildroot.go:70] root file system type: tmpfs
	I0429 11:02:12.267189   13764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:02:12.267262   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:14.408118   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:16.960938   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:16.961202   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:16.967669   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:16.968259   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:16.968427   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:02:17.143647   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:02:17.143855   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:21.755006   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:21.755589   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:21.755589   13764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:02:21.897946   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:02:21.897946   13764 machine.go:97] duration metric: took 43.3250104s to provisionDockerMachine
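The final SSH command of provisionDockerMachine is worth noting as a pattern: the rendered unit goes to docker.service.new, is diffed against the live unit, and only on a difference is it moved into place followed by daemon-reload/enable/restart, so re-runs with an unchanged unit never bounce the daemon. A local-filesystem sketch of the same compare-and-swap (swapIfChanged is hypothetical; minikube does this with the shell one-liner above):

    package provision

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // swapIfChanged writes the rendered unit to <path>.new, compares it with
    // the live unit, and only on a difference moves it into place and
    // reloads/enables/restarts docker, mirroring the SSH one-liner above.
    func swapIfChanged(path string, rendered []byte) error {
        current, _ := os.ReadFile(path) // a missing unit reads as nil, i.e. "changed"
        if bytes.Equal(current, rendered) {
            return nil // unchanged: no daemon restart on repeated runs
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }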
	I0429 11:02:21.897946   13764 start.go:293] postStartSetup for "functional-197400" (driver="hyperv")
	I0429 11:02:21.897946   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:02:21.911428   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:02:21.911428   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:26.502118   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:26.619226   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7077235s)
	I0429 11:02:26.634064   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:02:26.641916   13764 command_runner.go:130] > NAME=Buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 11:02:26.641916   13764 command_runner.go:130] > ID=buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 11:02:26.641916   13764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 11:02:26.641916   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:02:26.641916   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:02:26.642478   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:02:26.643334   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:02:26.643334   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:02:26.644676   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> hosts in /etc/test/nested/copy/8496
	I0429 11:02:26.644676   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> /etc/test/nested/copy/8496/hosts
	I0429 11:02:26.657704   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8496
	I0429 11:02:26.682055   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:02:26.741547   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts --> /etc/test/nested/copy/8496/hosts (40 bytes)
	I0429 11:02:26.792541   13764 start.go:296] duration metric: took 4.8945563s for postStartSetup
	I0429 11:02:26.792541   13764 fix.go:56] duration metric: took 50.9254062s for fixHost
	I0429 11:02:26.792541   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:31.385529   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:31.385992   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:31.385992   13764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714388551.520714576
	
	I0429 11:02:31.514751   13764 fix.go:216] guest clock: 1714388551.520714576
	I0429 11:02:31.514751   13764 fix.go:229] Guest: 2024-04-29 11:02:31.520714576 +0000 UTC Remote: 2024-04-29 11:02:26.7925417 +0000 UTC m=+56.526311901 (delta=4.728172876s)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:33.581114   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:36.130279   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:36.131025   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:36.131025   13764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714388551
	I0429 11:02:36.291751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:02:31 UTC 2024
	
	I0429 11:02:36.291751   13764 fix.go:236] clock set: Mon Apr 29 11:02:31 UTC 2024
	 (err=<nil>)
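fix.go's clock-sync step reads the guest clock with `date +%s.%N`, computes the drift against the host (delta=4.728172876s above), and corrects it with `sudo date -s @<epoch>`. A sketch of the delta computation; float64 parsing loses sub-microsecond precision, which is irrelevant at this magnitude:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output (1714388551.520714576
    // in the log) and returns its drift from the given host time; the caller
    // then issues `sudo date -s @<epoch>` when the drift is large enough.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Date(2024, 4, 29, 11, 2, 26, 792541700, time.UTC)
        d, _ := clockDelta("1714388551.520714576", host)
        fmt.Println(d) // ≈ 4.728172876s, matching the delta logged above
    }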
	I0429 11:02:36.291751   13764 start.go:83] releasing machines lock for "functional-197400", held for 1m0.4252951s
	I0429 11:02:36.291751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:38.419682   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:41.001337   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:02:41.001536   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:41.013399   13764 ssh_runner.go:195] Run: cat /version.json
	I0429 11:02:41.013399   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.159330   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:45.835688   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.836385   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.836904   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.862776   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.935735   13764 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 11:02:45.936039   13764 ssh_runner.go:235] Completed: cat /version.json: (4.9226007s)
	I0429 11:02:45.950826   13764 ssh_runner.go:195] Run: systemctl --version
	I0429 11:02:46.011745   13764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 11:02:46.011850   13764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0103766s)
	I0429 11:02:46.011850   13764 command_runner.go:130] > systemd 252 (252)
	I0429 11:02:46.011999   13764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 11:02:46.026211   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:02:46.035440   13764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 11:02:46.035904   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:02:46.048490   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:02:46.067930   13764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
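The find command above is how minikube parks competing CNI configs: any *bridge* or *podman* file in /etc/cni/net.d is renamed with a .mk_disabled suffix so only the chosen CNI stays active. A local sketch of the same sweep (disableForeignCNI is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableForeignCNI renames *bridge* and *podman* configs in dir to
    // <name>.mk_disabled, matching the `find ... -exec mv {} {}.mk_disabled`
    // above, skipping directories and already-disabled files.
    func disableForeignCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var moved []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, src)
            }
        }
        return moved, nil
    }

    func main() {
        moved, _ := disableForeignCNI("/etc/cni/net.d")
        fmt.Println(moved) // empty here, matching "no active bridge cni configs found"
    }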
	I0429 11:02:46.067930   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.068188   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:46.104796   13764 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 11:02:46.118218   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:02:46.152176   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:02:46.174564   13764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:02:46.187378   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:02:46.221768   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.255412   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:02:46.290318   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.325497   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:02:46.367045   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:02:46.403208   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:02:46.442281   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
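The run of sed edits above rewrites /etc/containerd/config.toml in place: pause image, cgroupfs driver (SystemdCgroup = false), runc v2 shims, CNI conf_dir, and unprivileged ports. The same rewrite for one representative key in Go; minikube really does it with sed over SSH as shown:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed line above:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // preserving the original indentation via the captured group.
    func setSystemdCgroup(configTOML string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
        in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
        fmt.Print(setSystemdCgroup(in, false))
    }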
	I0429 11:02:46.478926   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:02:46.499867   13764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 11:02:46.513297   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:02:46.549431   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:46.855826   13764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:02:46.905389   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.922503   13764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > [Unit]
	I0429 11:02:46.951373   13764 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 11:02:46.951373   13764 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 11:02:46.951373   13764 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 11:02:46.951470   13764 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitBurst=3
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 11:02:46.951470   13764 command_runner.go:130] > [Service]
	I0429 11:02:46.951507   13764 command_runner.go:130] > Type=notify
	I0429 11:02:46.951507   13764 command_runner.go:130] > Restart=on-failure
	I0429 11:02:46.951507   13764 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 11:02:46.951552   13764 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 11:02:46.951552   13764 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 11:02:46.951643   13764 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 11:02:46.951643   13764 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 11:02:46.951643   13764 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 11:02:46.951687   13764 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 11:02:46.951727   13764 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 11:02:46.951727   13764 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 11:02:46.951727   13764 command_runner.go:130] > ExecStart=
	I0429 11:02:46.951791   13764 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 11:02:46.951838   13764 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 11:02:46.951838   13764 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 11:02:46.951838   13764 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 11:02:46.951838   13764 command_runner.go:130] > LimitNOFILE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitNPROC=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitCORE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 11:02:46.951939   13764 command_runner.go:130] > TasksMax=infinity
	I0429 11:02:46.951939   13764 command_runner.go:130] > TimeoutStartSec=0
	I0429 11:02:46.951939   13764 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 11:02:46.951939   13764 command_runner.go:130] > Delegate=yes
	I0429 11:02:46.951939   13764 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 11:02:46.952000   13764 command_runner.go:130] > KillMode=process
	I0429 11:02:46.952000   13764 command_runner.go:130] > [Install]
	I0429 11:02:46.952000   13764 command_runner.go:130] > WantedBy=multi-user.target
	I0429 11:02:46.966498   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.010945   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:02:47.071693   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.111019   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:02:47.138047   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:47.173728   13764 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 11:02:47.188143   13764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:02:47.196459   13764 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 11:02:47.211733   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:02:47.232274   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:02:47.282245   13764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:02:47.579073   13764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:02:47.847228   13764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:02:47.847310   13764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
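docker.go:574 ships a 130-byte /etc/docker/daemon.json to switch dockerd to the cgroupfs driver. The payload is not echoed in the log, so the content in this sketch is an assumption based on the "cgroupfs as cgroup driver" message, not a verbatim copy:

    package main

    import "os"

    // writeDaemonJSON sketches the /etc/docker/daemon.json scp'd above. The
    // exact 130-byte payload is not shown in the log; this content is an
    // assumption inferred from the cgroupfs log message.
    func writeDaemonJSON(path string) error {
        cfg := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"], "log-driver": "json-file"}` + "\n")
        return os.WriteFile(path, cfg, 0o644)
    }

    func main() {
        _ = writeDaemonJSON("daemon.json") // the real target is /etc/docker/daemon.json in the VM
    }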
	I0429 11:02:47.911078   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:48.205114   13764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:03:59.569091   13764 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 11:03:59.569139   13764 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 11:03:59.569659   13764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3639778s)
	I0429 11:03:59.583436   13764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617057   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.617127   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617167   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.617214   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.617232   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.617382   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.617440   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.617474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617565   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618212   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618255   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0429 11:03:59.618824   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.618889   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.618988   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619010   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619060   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619155   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619952   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619975   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621634   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623248   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
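	(The "ignoring event" / "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" runs above and below are ordinary containerd-shim teardown, emitted once per exiting container; only warning/error levels and the systemd state changes matter when scanning a dump like this for faults. A hedged filter for reading it offline — plain grep, not something the test itself ran:
	    sudo journalctl --no-pager -u docker | grep -E 'level=(warning|error)|systemd\[1\]'
	)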
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625385   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
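	(This is the proximate failure: the stop at 11:02:48–11:02:59 completed cleanly per systemd, but the restarted dockerd[4230] waited from 11:02:59 to 11:03:59 and then gave up dialing /run/containerd/containerd.sock with "context deadline exceeded" — a 60s timeout. Note the socket path: the earlier boots logged "containerd not running, starting managed containerd" and served /var/run/docker/containerd/containerd.sock, while this instance dials the system path, which suggests the restart expected a standalone containerd that never answered. A diagnostic sketch from inside the VM via minikube ssh — standard systemctl/ls calls, assumed and not taken from this run:
	    sudo systemctl status containerd.service --no-pager   # is a system containerd unit up?
	    ls -l /run/containerd/containerd.sock                 # the socket dockerd[4230] dialed
	    ls -l /var/run/docker/containerd/containerd.sock      # the dockerd-managed socket seen earlier
	)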
	I0429 11:03:59.655510   13764 out.go:177] 
	W0429 11:03:59.658137   13764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
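	(The two commands the hint names can be issued from the Windows host without an interactive session, via minikube's ssh passthrough; a sketch using this run's profile name functional-197400, taken from the log above:
	    out/minikube-windows-amd64.exe -p functional-197400 ssh -- sudo systemctl status docker.service --no-pager
	    out/minikube-windows-amd64.exe -p functional-197400 ssh -- sudo journalctl -xeu docker.service --no-pager
	)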
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
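	(For contrast, the 10:59:29–10:59:30 sequence above is a healthy start: managed containerd boots in ~70ms, overlay2 is selected, containers load, and the API comes up on both the unix socket and [::]:2376 about 1.5s after "Starting up". A quick liveness probe one could run at that point — a sketch against Docker's standard /_ping endpoint, assuming curl is available in the VM:
	    docker version --format '{{.Server.Version}}'                   # log above shows 26.0.2
	    curl --unix-socket /var/run/docker.sock http://localhost/_ping  # prints OK when healthy
	)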
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 11:03:59.659798   13764 out.go:239] * 
	W0429 11:03:59.661047   13764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:03:59.665567   13764 out.go:177] 
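	The root cause in the captured journal is dockerd (pid 4230) timing out while dialing /run/containerd/containerd.sock. Note that the earlier, successful starts served a managed containerd under /var/run/docker/containerd/, while this failing restart dials the system containerd path, so containerd itself appears never to have come back after the 11:02:58 shutdown. A minimal triage sketch from inside the guest (hypothetical commands, not part of the captured log; assumes SSH access via minikube ssh and the standard systemd unit names visible in this journal):

		minikube -p functional-197400 ssh
		sudo systemctl status docker --no-pager        # confirm the failed/restarting unit state
		sudo journalctl -u docker --no-pager -n 50     # should show the containerd dial timeout above
		ls -l /run/containerd/containerd.sock          # does the socket dockerd dials actually exist?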
	
	
	==> Docker <==
	Apr 29 11:03:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:03:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:03:59 functional-197400 dockerd[4440]: time="2024-04-29T11:03:59.763369694Z" level=info msg="Starting up"
	Apr 29 11:04:59 functional-197400 dockerd[4440]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 11:04:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:04:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:04:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006'"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="error getting RW layer size for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:04:59 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:04:59Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e'"
	Apr 29 11:05:00 functional-197400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 29 11:05:00 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:05:00 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:05:02Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.105646] kauditd_printk_skb: 59 callbacks suppressed
	[Apr29 11:00] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.209517] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.250233] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.826178] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.204930] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.214674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.298087] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.281397] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.104642] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.002494] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.574798] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.643589] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.110283] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.558246] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.179717] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.880707] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.211200] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.322426] kauditd_printk_skb: 88 callbacks suppressed
	[Apr29 11:01] kauditd_printk_skb: 10 callbacks suppressed
	[Apr29 11:02] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.708559] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.292965] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.339194] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +5.346272] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:06:00 up 7 min,  0 users,  load average: 0.02, 0.18, 0.11
	Linux functional-197400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 11:05:51 functional-197400 kubelet[2131]: E0429 11:05:51.680120    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:05:51 functional-197400 kubelet[2131]: E0429 11:05:51.680217    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 11:05:52 functional-197400 kubelet[2131]: E0429 11:05:52.049508    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused" interval="7s"
	Apr 29 11:05:54 functional-197400 kubelet[2131]: I0429 11:05:54.427546    2131 status_manager.go:853] "Failed to get status for pod" podUID="3b208ed450e2701a29ea259268f7cae7" pod="kube-system/kube-apiserver-functional-197400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-197400\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:05:55 functional-197400 kubelet[2131]: E0429 11:05:55.908426    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.031506586s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 11:05:59 functional-197400 kubelet[2131]: E0429 11:05:59.051453    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused" interval="7s"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.153861    2131 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.153960    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.154007    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.154024    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156371    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156405    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156472    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156511    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: I0429 11:06:00.156525    2131 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156600    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156618    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: I0429 11:06:00.156629    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156651    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.156675    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.157682    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.157750    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.158885    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.158950    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:06:00 functional-197400 kubelet[2131]: E0429 11:06:00.159161    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:04:12.094211    1192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 11:04:59.787949    1192 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.824466    1192 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.853652    1192 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.882655    1192 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.912350    1192 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.940383    1192 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.968597    1192 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:04:59.996674    1192 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400: exit status 2 (11.9350999s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:06:01.092370    3420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-197400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (282.63s)
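
The dump above shows the failure chain for SoftStart: dockerd dies at 11:04:59, systemd schedules restart attempt 2 at 11:05:00, and every CRI call over /var/run/docker.sock fails with "connection reset by peer" until the kubelet's PLEG exceeds its 3m0s health threshold and pod synchronization stops. (The "%!F(MISSING)" sequences are Go's fmt package mangling the URL-escaped %2F in the socket path when the error is reprinted, not part of the actual request.) A minimal sketch for reproducing the probe by hand, assuming the profile's VM still accepts SSH and that cri-dockerd runs under its stock systemd unit name cri-docker (an assumption; the report does not name the unit):

	# inspect the engine and CRI shim units inside the guest
	minikube -p functional-197400 ssh -- "sudo systemctl status docker cri-docker --no-pager"
	# tail the engine journal around the crash window
	minikube -p functional-197400 ssh -- "sudo journalctl -u docker --no-pager -n 50"
	# ping the engine API over the unix socket; a healthy daemon answers OK
	minikube -p functional-197400 ssh -- "curl -s --unix-socket /var/run/docker.sock http://localhost/_ping; echo"

A connection reset on the last probe reproduces exactly the error the cri-dockerd and kubelet lines above record.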

                                                
                                    
TestFunctional/serial/KubectlGetPods (180.55s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-197400 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-197400 get po -A: exit status 1 (10.403821s)

                                                
                                                
** stderr ** 
	E0429 11:06:15.192922    2640 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:06:17.319112    2640 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:06:19.363781    2640 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:06:21.391343    2640 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:06:23.425644    2640 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-197400 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0429 11:06:15.192922    2640 memcache.go:265] couldn't get current server API group list: Get \"https://172.26.179.82:8441/api?timeout=32s\": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0429 11:06:17.319112    2640 memcache.go:265] couldn't get current server API group list: Get \"https://172.26.179.82:8441/api?timeout=32s\": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0429 11:06:19.363781    2640 memcache.go:265] couldn't get current server API group list: Get \"https://172.26.179.82:8441/api?timeout=32s\": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0429 11:06:21.391343    2640 memcache.go:265] couldn't get current server API group list: Get \"https://172.26.179.82:8441/api?timeout=32s\": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0429 11:06:23.425644    2640 memcache.go:265] couldn't get current server API group list: Get \"https://172.26.179.82:8441/api?timeout=32s\": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-197400 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-197400 get po -A"
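
kubectl's connectex errors above mean the connection to 172.26.179.82:8441 was actively refused, i.e. nothing was listening on the apiserver port, consistent with the Stopped status recorded at the end of SoftStart. A quick way to separate a dead apiserver from a Hyper-V networking fault is to repeat the probe from inside the guest, where no virtual switch is on the path; a sketch, assuming the VM still accepts SSH (and noting that /healthz may additionally require credentials where anonymous auth is disabled):

	# probe the apiserver health endpoint from inside the VM
	minikube -p functional-197400 ssh -- "curl -sk https://localhost:8441/healthz; echo"

If the in-guest probe is also refused, the apiserver container itself is down, matching the Docker engine failures above; if it answers ok while host-side kubectl still fails, the Hyper-V network path is the problem instead.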
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400
E0429 11:06:27.432180    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400: exit status 2 (11.5458446s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:06:23.561619    8828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs -n 25
E0429 11:07:50.634664    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs -n 25: (2m26.220436s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-839400 ip                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:48 UTC |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:48 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:48 UTC | 29 Apr 24 10:49 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-839400 addons disable                                          | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:49 UTC | 29 Apr 24 10:49 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-839400                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:49 UTC | 29 Apr 24 10:50 UTC |
	| addons  | enable dashboard -p                                                   | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:50 UTC |
	|         | addons-839400                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-839400                                                      | addons-839400     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:50 UTC | 29 Apr 24 10:51 UTC |
	| start   | -p nospam-205500 -n=1 --memory=2250 --wait=false                      | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:51 UTC | 29 Apr 24 10:54 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:54 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                               | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-205500                                                      | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	| start   | -p functional-197400                                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 11:01 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-197400                                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:01 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:01:30.445059   13764 out.go:291] Setting OutFile to fd 884 ...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.445789   13764 out.go:304] Setting ErrFile to fd 280...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.469783   13764 out.go:298] Setting JSON to false
	I0429 11:01:30.474075   13764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":29963,"bootTime":1714358527,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:01:30.474075   13764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:01:30.478082   13764 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:01:30.484053   13764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:01:30.482999   13764 notify.go:220] Checking for updates...
	I0429 11:01:30.487059   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:01:30.489426   13764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:01:30.492314   13764 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:01:30.494672   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:01:30.497561   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:01:30.498504   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:01:35.797581   13764 out.go:177] * Using the hyperv driver based on existing profile
	I0429 11:01:35.800821   13764 start.go:297] selected driver: hyperv
	I0429 11:01:35.800821   13764 start.go:901] validating driver "hyperv" against &{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.800821   13764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:01:35.854447   13764 cni.go:84] Creating CNI manager for ""
	I0429 11:01:35.854447   13764 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 11:01:35.855168   13764 start.go:340] cluster config:
	{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.855712   13764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:01:35.860024   13764 out.go:177] * Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	I0429 11:01:35.862486   13764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:01:35.862966   13764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:01:35.862966   13764 cache.go:56] Caching tarball of preloaded images
	I0429 11:01:35.863088   13764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:01:35.863509   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:01:35.863697   13764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\config.json ...
	I0429 11:01:35.865973   13764 start.go:360] acquireMachinesLock for functional-197400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:01:35.865973   13764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-197400"
	I0429 11:01:35.865973   13764 start.go:96] Skipping create...Using existing machine configuration
	I0429 11:01:35.866728   13764 fix.go:54] fixHost starting: 
	I0429 11:01:35.866814   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:38.565164   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:38.566072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:38.566072   13764 fix.go:112] recreateIfNeeded on functional-197400: state=Running err=<nil>
	W0429 11:01:38.566163   13764 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 11:01:38.570099   13764 out.go:177] * Updating the running hyperv "functional-197400" VM ...
	I0429 11:01:38.572589   13764 machine.go:94] provisionDockerMachine start ...
	I0429 11:01:38.572790   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:40.728211   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:43.337044   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:43.338056   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:43.344719   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:43.344884   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:43.344884   13764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:01:43.492864   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:43.493032   13764 buildroot.go:166] provisioning hostname "functional-197400"
	I0429 11:01:43.493146   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:45.595027   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:48.153963   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:48.154713   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:48.154713   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-197400 && echo "functional-197400" | sudo tee /etc/hostname
	I0429 11:01:48.322635   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:48.322635   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:50.426116   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:53.002862   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:53.003355   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:53.003457   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-197400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-197400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-197400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:01:53.146326   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:01:53.146326   13764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:01:53.146326   13764 buildroot.go:174] setting up certificates
	I0429 11:01:53.146326   13764 provision.go:84] configureAuth start
	I0429 11:01:53.146326   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:57.763195   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:57.763363   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:57.763439   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:59.853320   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:02.368674   13764 provision.go:143] copyHostCerts
	I0429 11:02:02.369074   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:02:02.369383   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:02:02.369383   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:02:02.369931   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:02:02.370685   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:02:02.370685   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:02:02.370685   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:02:02.371650   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:02:02.372440   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:02:02.372519   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:02:02.372519   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:02:02.373046   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:02:02.374016   13764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-197400 san=[127.0.0.1 172.26.179.82 functional-197400 localhost minikube]
	I0429 11:02:02.495876   13764 provision.go:177] copyRemoteCerts
	I0429 11:02:02.510020   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:02:02.510020   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:04.618809   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:07.168803   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:07.282611   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7725535s)
	I0429 11:02:07.282611   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:02:07.282611   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 11:02:07.334346   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:02:07.334955   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:02:07.390221   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:02:07.391689   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:02:07.447983   13764 provision.go:87] duration metric: took 14.3015428s to configureAuth
	I0429 11:02:07.448063   13764 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:02:07.448063   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:02:07.448747   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:09.550299   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:12.123983   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:12.124562   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:12.124562   13764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:02:12.266791   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:02:12.267014   13764 buildroot.go:70] root file system type: tmpfs
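
buildroot.go:70 records the root filesystem type by running `df --output=fstype / | tail -n 1` over SSH; `tmpfs` means the buildroot guest's rootfs is ephemeral, which is presumably why the docker unit is re-rendered on every provision (next lines). A sketch of that probe using golang.org/x/crypto/ssh, assuming key auth as the `docker` user:

package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// rootFSType runs the logged probe over SSH and returns the last line of
// df's output, i.e. the filesystem type of "/".
func rootFSType(addr string, cfg *ssh.ClientConfig) (string, error) {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output("df --output=fstype / | tail -n 1")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	key, _ := os.ReadFile("id_rsa") // key path elided; the log uses the machine's id_rsa
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	}
	fstype, err := rootFSType("172.26.179.82:22", cfg)
	fmt.Println(fstype, err) // tmpfs <nil>
}
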
	I0429 11:02:12.267189   13764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:02:12.267262   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:14.408118   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:16.960938   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:16.961202   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:16.967669   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:16.968259   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:16.968427   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:02:17.143647   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:02:17.143855   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:21.755006   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:21.755589   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:21.755589   13764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:02:21.897946   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:02:21.897946   13764 machine.go:97] duration metric: took 43.3250104s to provisionDockerMachine
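
The one-liner at 11:02:21 makes the unit install idempotent: if `diff -u` finds the freshly rendered docker.service.new identical to the live unit, nothing happens; only on a difference is the new file moved into place followed by daemon-reload, enable, and restart. A small sketch of composing that command (helper name is hypothetical):

package main

import "fmt"

// updateUnitCmd reproduces the idempotent install step logged above: swap in
// the .new unit and bounce the service only when the rendered unit differs
// from what is already on disk.
func updateUnitCmd(unit string) string {
	p := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		p, unit)
}

func main() { fmt.Println(updateUnitCmd("docker.service")) }
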
	I0429 11:02:21.897946   13764 start.go:293] postStartSetup for "functional-197400" (driver="hyperv")
	I0429 11:02:21.897946   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:02:21.911428   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:02:21.911428   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:26.502118   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:26.619226   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7077235s)
	I0429 11:02:26.634064   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:02:26.641916   13764 command_runner.go:130] > NAME=Buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 11:02:26.641916   13764 command_runner.go:130] > ID=buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 11:02:26.641916   13764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 11:02:26.641916   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:02:26.641916   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:02:26.642478   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:02:26.643334   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:02:26.643334   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:02:26.644676   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> hosts in /etc/test/nested/copy/8496
	I0429 11:02:26.644676   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> /etc/test/nested/copy/8496/hosts
	I0429 11:02:26.657704   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8496
	I0429 11:02:26.682055   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:02:26.741547   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts --> /etc/test/nested/copy/8496/hosts (40 bytes)
	I0429 11:02:26.792541   13764 start.go:296] duration metric: took 4.8945563s for postStartSetup
	I0429 11:02:26.792541   13764 fix.go:56] duration metric: took 50.9254062s for fixHost
	I0429 11:02:26.792541   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:31.385529   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:31.385992   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:31.385992   13764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:02:31.514751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714388551.520714576
	
	I0429 11:02:31.514751   13764 fix.go:216] guest clock: 1714388551.520714576
	I0429 11:02:31.514751   13764 fix.go:229] Guest: 2024-04-29 11:02:31.520714576 +0000 UTC Remote: 2024-04-29 11:02:26.7925417 +0000 UTC m=+56.526311901 (delta=4.728172876s)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:33.581114   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:36.130279   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:36.131025   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:36.131025   13764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714388551
	I0429 11:02:36.291751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:02:31 UTC 2024
	
	I0429 11:02:36.291751   13764 fix.go:236] clock set: Mon Apr 29 11:02:31 UTC 2024
	 (err=<nil>)
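
fix.go:216-236 compares the guest clock (the `date +%s.%N` output above) against the host's wall clock and, because the ~4.7s delta exceeded tolerance, resets the guest with `sudo date -s @<epoch>`. A sketch of that check; the 2-second tolerance is an assumption, not minikube's exact constant:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockFixCmd parses the guest's "date +%s.%N" output and, if it drifts from
// hostNow by more than tolerance, returns the corrective command seen in the
// log ("sudo date -s @<epoch>").
func clockFixCmd(guestOut string, hostNow time.Time, tolerance time.Duration) (string, bool) {
	secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	delta := secs - float64(hostNow.UnixNano())/1e9
	if math.Abs(delta) <= tolerance.Seconds() {
		return "", false
	}
	return fmt.Sprintf("sudo date -s @%d", hostNow.Unix()), true
}

func main() {
	cmd, needed := clockFixCmd("1714388551.520714576", time.Unix(1714388546, 0), 2*time.Second)
	fmt.Println(needed, cmd) // true sudo date -s @1714388546
}
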
	I0429 11:02:36.291751   13764 start.go:83] releasing machines lock for "functional-197400", held for 1m0.4252951s
	I0429 11:02:36.291751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:38.419682   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:41.001337   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:02:41.001536   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:41.013399   13764 ssh_runner.go:195] Run: cat /version.json
	I0429 11:02:41.013399   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.159330   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:45.835688   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.836385   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.836904   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.862776   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.935735   13764 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 11:02:45.936039   13764 ssh_runner.go:235] Completed: cat /version.json: (4.9226007s)
	I0429 11:02:45.950826   13764 ssh_runner.go:195] Run: systemctl --version
	I0429 11:02:46.011745   13764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 11:02:46.011850   13764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0103766s)
	I0429 11:02:46.011850   13764 command_runner.go:130] > systemd 252 (252)
	I0429 11:02:46.011999   13764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 11:02:46.026211   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:02:46.035440   13764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 11:02:46.035904   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:02:46.048490   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:02:46.067930   13764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 11:02:46.067930   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.068188   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:46.104796   13764 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 11:02:46.118218   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:02:46.152176   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:02:46.174564   13764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:02:46.187378   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:02:46.221768   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.255412   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:02:46.290318   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.325497   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:02:46.367045   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:02:46.403208   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:02:46.442281   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
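
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force `SystemdCgroup = false` (i.e. the cgroupfs driver named at containerd.go:146), migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A compact sketch of driving those edits, with the command strings copied from the log and minikube's ssh_runner stubbed by a callback:

package main

import "fmt"

// containerdCgroupfsEdits applies the in-place config.toml rewrites from the
// log. run stands in for an SSH command runner.
func containerdCgroupfsEdits(run func(cmd string) error) error {
	cmds := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%s: %w", c, err)
		}
	}
	return nil
}

func main() {
	_ = containerdCgroupfsEdits(func(cmd string) error {
		fmt.Println("would run:", cmd)
		return nil
	})
}
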
	I0429 11:02:46.478926   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:02:46.499867   13764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 11:02:46.513297   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:02:46.549431   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:46.855826   13764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:02:46.905389   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.922503   13764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > [Unit]
	I0429 11:02:46.951373   13764 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 11:02:46.951373   13764 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 11:02:46.951373   13764 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 11:02:46.951470   13764 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitBurst=3
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 11:02:46.951470   13764 command_runner.go:130] > [Service]
	I0429 11:02:46.951507   13764 command_runner.go:130] > Type=notify
	I0429 11:02:46.951507   13764 command_runner.go:130] > Restart=on-failure
	I0429 11:02:46.951507   13764 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 11:02:46.951552   13764 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 11:02:46.951552   13764 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 11:02:46.951643   13764 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 11:02:46.951643   13764 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 11:02:46.951643   13764 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 11:02:46.951687   13764 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 11:02:46.951727   13764 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 11:02:46.951727   13764 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 11:02:46.951727   13764 command_runner.go:130] > ExecStart=
	I0429 11:02:46.951791   13764 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 11:02:46.951838   13764 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 11:02:46.951838   13764 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 11:02:46.951838   13764 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 11:02:46.951838   13764 command_runner.go:130] > LimitNOFILE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitNPROC=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitCORE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 11:02:46.951939   13764 command_runner.go:130] > TasksMax=infinity
	I0429 11:02:46.951939   13764 command_runner.go:130] > TimeoutStartSec=0
	I0429 11:02:46.951939   13764 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 11:02:46.951939   13764 command_runner.go:130] > Delegate=yes
	I0429 11:02:46.951939   13764 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 11:02:46.952000   13764 command_runner.go:130] > KillMode=process
	I0429 11:02:46.952000   13764 command_runner.go:130] > [Install]
	I0429 11:02:46.952000   13764 command_runner.go:130] > WantedBy=multi-user.target
	I0429 11:02:46.966498   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.010945   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:02:47.071693   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.111019   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:02:47.138047   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:47.173728   13764 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
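
Note that /etc/crictl.yaml is written twice: first pointed at containerd's socket (11:02:46), then rewritten here for cri-dockerd once Docker wins runtime detection. A trivial sketch of rendering that one-line file, with both endpoints taken verbatim from the log:

package main

import "fmt"

// crictlYAML renders the /etc/crictl.yaml payload seen twice in the log:
// containerd's endpoint by default, cri-dockerd's once Docker is selected.
func crictlYAML(useDocker bool) string {
	ep := "unix:///run/containerd/containerd.sock"
	if useDocker {
		ep = "unix:///var/run/cri-dockerd.sock"
	}
	return fmt.Sprintf("runtime-endpoint: %s\n", ep)
}

func main() { fmt.Print(crictlYAML(true)) }
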
	I0429 11:02:47.188143   13764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:02:47.196459   13764 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 11:02:47.211733   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:02:47.232274   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:02:47.282245   13764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:02:47.579073   13764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:02:47.847228   13764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:02:47.847310   13764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
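
The 130-byte daemon.json payload is copied from memory and never echoed, so its exact contents are elided from this log. docker.go:574 says only that it configures the cgroupfs driver; a daemon.json achieving that conventionally uses the exec-opts key, as in the illustrative (not byte-exact) sketch below:

package main

import (
	"encoding/json"
	"fmt"
)

// A guess at the shape of the elided daemon.json: the log only tells us it
// selects the cgroupfs driver. exec-opts is the conventional knob for that;
// any additional keys in the real payload are unknown.
func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
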
	I0429 11:02:47.911078   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:48.205114   13764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:03:59.569091   13764 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 11:03:59.569139   13764 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 11:03:59.569659   13764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3639778s)
	I0429 11:03:59.583436   13764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617057   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.617127   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617167   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.617214   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.617232   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.617382   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.617440   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.617474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617565   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618212   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618255   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0429 11:03:59.618824   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.618889   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.618988   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619010   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619060   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619155   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619952   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619975   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621634   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623248   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625385   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 11:03:59.655510   13764 out.go:177] 
	W0429 11:03:59.658137   13764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 11:03:59.659798   13764 out.go:239] * 
	W0429 11:03:59.661047   13764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:03:59.665567   13764 out.go:177] 
	
	
	==> Docker <==
	Apr 29 11:07:00 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:07:00 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:07:00 functional-197400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 29 11:07:00 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:07:00 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:07:00 functional-197400 dockerd[5288]: time="2024-04-29T11:07:00.694794756Z" level=info msg="Starting up"
	Apr 29 11:08:00 functional-197400 dockerd[5288]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:08:00 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:08:00 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:08:00 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error getting RW layer size for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b'"
	Apr 29 11:08:00 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:08:00Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:08:02Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.105646] kauditd_printk_skb: 59 callbacks suppressed
	[Apr29 11:00] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.209517] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.250233] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.826178] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.204930] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.214674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.298087] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.281397] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.104642] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.002494] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.574798] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.643589] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.110283] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.558246] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.179717] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.880707] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.211200] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.322426] kauditd_printk_skb: 88 callbacks suppressed
	[Apr29 11:01] kauditd_printk_skb: 10 callbacks suppressed
	[Apr29 11:02] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.708559] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.292965] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.339194] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +5.346272] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:09:01 up 10 min,  0 users,  load average: 0.01, 0.10, 0.09
	Linux functional-197400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.733009    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.26.179.82:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-197400.17cabb52ad13af9a  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-197400,UID:521fd6f1bee307afaa02270407a34b9e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-197400,},FirstTimestamp:2024-04-29 11:02:51.93335593 +0000 UTC m=+137.737027065,LastTimestamp:2024-04-29 11:02:51.93335593 +0000 UTC m=+137.737027065,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-197400,}"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.835650    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?resourceVersion=0&timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.837333    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.838767    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.840403    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.841385    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.841476    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 11:08:55 functional-197400 kubelet[2131]: E0429 11:08:55.942126    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m8.065230709s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.929565    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938492    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938599    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938448    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938634    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938660    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.938688    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.939032    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.939140    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.939170    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.939193    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: I0429 11:09:00.939207    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.940111    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.940362    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.940723    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 29 11:09:00 functional-197400 kubelet[2131]: E0429 11:09:00.942355    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.065468482s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 11:09:01 functional-197400 kubelet[2131]: E0429 11:09:01.118901    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused" interval="7s"
	

-- /stdout --
** stderr ** 
	W0429 11:06:35.096619   14248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 11:07:00.438599   14248 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:07:00.472739   14248 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:07:00.503809   14248 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:07:00.534544   14248 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:07:00.565452   14248 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:07:00.595711   14248 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:08:00.714842   14248 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:08:00.746740   14248 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
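The kubelet block above shows dockerd itself is down: reads on /var/run/docker.sock are reset, and PLEG was last active 6m13s against the 3m0s health threshold. The "%!F(MISSING)" runs are only Go fmt verb-escaping of the percent-encoded socket URL inside the log message, not part of the real request path. A minimal by-hand probe of the daemon, reusing the ssh form this suite already uses (diagnostic sketch only, not part of the harness):

	out/minikube-windows-amd64.exe -p functional-197400 ssh sudo systemctl status docker
	out/minikube-windows-amd64.exe -p functional-197400 ssh sudo journalctl -u docker --no-pager -n 50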
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400: exit status 2 (11.8560184s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0429 11:09:01.854694    7384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-197400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (180.55s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl images: exit status 1 (11.316649s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0429 11:16:03.993176    8184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0429 11:16:03.993176    8184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.32s)
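The FATA line is crictl timing out while validating the CRI v1 image API on unix:///var/run/cri-dockerd.sock, which follows directly from the dead dockerd behind that socket. A hedged probe with an explicit endpoint and a longer timeout (the cri-docker.socket unit name is assumed from cri-dockerd's standard packaging; it is not confirmed by this log):

	out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock --timeout 30s images
	out/minikube-windows-amd64.exe -p functional-197400 ssh sudo systemctl status cri-docker.socket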

TestFunctional/serial/CacheCmd/cache/cache_reload (179.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh sudo docker rmi registry.k8s.io/pause:latest
E0429 11:16:27.437047    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (47.8074518s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	W0429 11:16:15.325014    2044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-197400 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.1497978s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0429 11:17:03.123482    1524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cache reload: (1m49.3749414s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.2690632s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0429 11:19:03.647952   12596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-197400 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.60s)
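Every host-side invocation above also emits the same W-level "Unable to resolve the current Docker CLI context" warning; that comes from stale context metadata in the host's .docker directory and is separate from the in-VM daemon failure. One way to inspect that state on the host (a sketch, not a verified fix for this run):

	docker context ls
	docker context use default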

TestFunctional/serial/MinikubeKubectlCmd (180.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 kubectl -- --context functional-197400 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 kubectl -- --context functional-197400 get pods: exit status 1 (10.7896927s)

** stderr ** 
	W0429 11:22:17.100422    8368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 11:22:19.494714    4184 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:22:21.620543    4184 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:22:23.658896    4184 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:22:25.685420    4184 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0429 11:22:27.729369    4184 memcache.go:265] couldn't get current server API group list: Get "https://172.26.179.82:8441/api?timeout=32s": dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.26.179.82:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-197400 kubectl -- --context functional-197400 get pods": exit status 1
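The kubectl failure here is a plain TCP refusal on the apiserver endpoint from the dials above (172.26.179.82:8441), consistent with the apiserver having been reported Stopped earlier. A quick host-side reachability check using that same IP and port (PowerShell, diagnostic sketch only):

	Test-NetConnection -ComputerName 172.26.179.82 -Port 8441
	out/minikube-windows-amd64.exe -p functional-197400 status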
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400: exit status 2 (11.7105939s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0429 11:22:27.879718    8752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs -n 25
E0429 11:24:30.643240    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs -n 25: (2m25.7325483s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-205500                                            | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	| start   | -p functional-197400                                        | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 11:01 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-197400                                        | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:01 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:09 UTC | 29 Apr 24 11:11 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:11 UTC | 29 Apr 24 11:13 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:13 UTC | 29 Apr 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:15 UTC | 29 Apr 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-197400                 |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache delete                              | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-197400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	| ssh     | functional-197400 ssh sudo                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-197400                                           | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-197400 ssh                                       | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:17 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache reload                              | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:17 UTC | 29 Apr 24 11:19 UTC |
	| ssh     | functional-197400 ssh                                       | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC | 29 Apr 24 11:19 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC | 29 Apr 24 11:19 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-197400 kubectl --                                | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:22 UTC |                     |
	|         | --context functional-197400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:01:30.445059   13764 out.go:291] Setting OutFile to fd 884 ...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.445789   13764 out.go:304] Setting ErrFile to fd 280...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.469783   13764 out.go:298] Setting JSON to false
	I0429 11:01:30.474075   13764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":29963,"bootTime":1714358527,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:01:30.474075   13764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:01:30.478082   13764 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:01:30.484053   13764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:01:30.482999   13764 notify.go:220] Checking for updates...
	I0429 11:01:30.487059   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:01:30.489426   13764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:01:30.492314   13764 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:01:30.494672   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:01:30.497561   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:01:30.498504   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:01:35.797581   13764 out.go:177] * Using the hyperv driver based on existing profile
	I0429 11:01:35.800821   13764 start.go:297] selected driver: hyperv
	I0429 11:01:35.800821   13764 start.go:901] validating driver "hyperv" against &{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.800821   13764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:01:35.854447   13764 cni.go:84] Creating CNI manager for ""
	I0429 11:01:35.854447   13764 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 11:01:35.855168   13764 start.go:340] cluster config:
	{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.855712   13764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:01:35.860024   13764 out.go:177] * Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	I0429 11:01:35.862486   13764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:01:35.862966   13764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:01:35.862966   13764 cache.go:56] Caching tarball of preloaded images
	I0429 11:01:35.863088   13764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:01:35.863509   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:01:35.863697   13764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\config.json ...
	I0429 11:01:35.865973   13764 start.go:360] acquireMachinesLock for functional-197400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:01:35.865973   13764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-197400"
	I0429 11:01:35.865973   13764 start.go:96] Skipping create...Using existing machine configuration
	I0429 11:01:35.866728   13764 fix.go:54] fixHost starting: 
	I0429 11:01:35.866814   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:38.565164   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:38.566072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:38.566072   13764 fix.go:112] recreateIfNeeded on functional-197400: state=Running err=<nil>
	W0429 11:01:38.566163   13764 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 11:01:38.570099   13764 out.go:177] * Updating the running hyperv "functional-197400" VM ...
	I0429 11:01:38.572589   13764 machine.go:94] provisionDockerMachine start ...
	I0429 11:01:38.572790   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:40.728211   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:43.337044   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:43.338056   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:43.344719   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:43.344884   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:43.344884   13764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:01:43.492864   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:43.493032   13764 buildroot.go:166] provisioning hostname "functional-197400"
	I0429 11:01:43.493146   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:45.595027   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:48.153963   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:48.154713   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:48.154713   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-197400 && echo "functional-197400" | sudo tee /etc/hostname
	I0429 11:01:48.322635   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:48.322635   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:50.426116   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:53.002862   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:53.003355   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:53.003457   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-197400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-197400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-197400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:01:53.146326   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:01:53.146326   13764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:01:53.146326   13764 buildroot.go:174] setting up certificates
	I0429 11:01:53.146326   13764 provision.go:84] configureAuth start
	I0429 11:01:53.146326   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:57.763195   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:57.763363   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:57.763439   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:59.853320   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:02.368674   13764 provision.go:143] copyHostCerts
	I0429 11:02:02.369074   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:02:02.369383   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:02:02.369383   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:02:02.369931   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:02:02.370685   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:02:02.370685   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:02:02.370685   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:02:02.371650   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:02:02.372440   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:02:02.372519   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:02:02.372519   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:02:02.373046   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:02:02.374016   13764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-197400 san=[127.0.0.1 172.26.179.82 functional-197400 localhost minikube]
	I0429 11:02:02.495876   13764 provision.go:177] copyRemoteCerts
	I0429 11:02:02.510020   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:02:02.510020   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:04.618809   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:07.168803   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:07.282611   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7725535s)
	I0429 11:02:07.282611   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:02:07.282611   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 11:02:07.334346   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:02:07.334955   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:02:07.390221   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:02:07.391689   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:02:07.447983   13764 provision.go:87] duration metric: took 14.3015428s to configureAuth
	I0429 11:02:07.448063   13764 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:02:07.448063   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:02:07.448747   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:09.550299   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:12.123983   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:12.124562   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:12.124562   13764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:02:12.266791   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:02:12.267014   13764 buildroot.go:70] root file system type: tmpfs
	I0429 11:02:12.267189   13764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:02:12.267262   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:14.408118   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:16.960938   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:16.961202   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:16.967669   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:16.968259   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:16.968427   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:02:17.143647   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:02:17.143855   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:21.755006   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:21.755589   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:21.755589   13764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:02:21.897946   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
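Editorial note: the guarded one-liner above is an idempotent update. diff exits non-zero only when the rendered unit differs from the installed one, so the mv/daemon-reload/enable/restart chain runs only on an actual change. Expanded, the same logic reads (sketch):

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload \
	    && sudo systemctl -f enable docker \
	    && sudo systemctl -f restart docker
	fi

In this run the command produced no output, i.e. the new unit matched the installed one, so no restart was triggered here.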
	I0429 11:02:21.897946   13764 machine.go:97] duration metric: took 43.3250104s to provisionDockerMachine
	I0429 11:02:21.897946   13764 start.go:293] postStartSetup for "functional-197400" (driver="hyperv")
	I0429 11:02:21.897946   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:02:21.911428   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:02:21.911428   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:26.502118   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:26.619226   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7077235s)
	I0429 11:02:26.634064   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:02:26.641916   13764 command_runner.go:130] > NAME=Buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 11:02:26.641916   13764 command_runner.go:130] > ID=buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 11:02:26.641916   13764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 11:02:26.641916   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:02:26.641916   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:02:26.642478   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:02:26.643334   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:02:26.643334   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:02:26.644676   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> hosts in /etc/test/nested/copy/8496
	I0429 11:02:26.644676   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> /etc/test/nested/copy/8496/hosts
	I0429 11:02:26.657704   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8496
	I0429 11:02:26.682055   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:02:26.741547   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts --> /etc/test/nested/copy/8496/hosts (40 bytes)
	I0429 11:02:26.792541   13764 start.go:296] duration metric: took 4.8945563s for postStartSetup
	I0429 11:02:26.792541   13764 fix.go:56] duration metric: took 50.9254062s for fixHost
	I0429 11:02:26.792541   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:31.385529   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:31.385992   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:31.385992   13764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:02:31.514751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714388551.520714576
	
	I0429 11:02:31.514751   13764 fix.go:216] guest clock: 1714388551.520714576
	I0429 11:02:31.514751   13764 fix.go:229] Guest: 2024-04-29 11:02:31.520714576 +0000 UTC Remote: 2024-04-29 11:02:26.7925417 +0000 UTC m=+56.526311901 (delta=4.728172876s)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:33.581114   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:36.130279   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:36.131025   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:36.131025   13764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714388551
	I0429 11:02:36.291751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:02:31 UTC 2024
	
	I0429 11:02:36.291751   13764 fix.go:236] clock set: Mon Apr 29 11:02:31 UTC 2024
	 (err=<nil>)
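Editorial note: the clock-sync step reads the guest clock with "date +%s.%N", compares it against the host-side timestamp, and hard-sets the guest clock when the drift is too large. Worked from the log: guest 11:02:31.520 minus host-side 11:02:26.792 gives the reported delta of about 4.728s, after which the provisioner runs the "date -s" shown above. A sketch of the check (the tolerance, and the use of bc, are assumptions, not from this log):

	guest=$(date +%s.%N)        # guest clock in epoch seconds
	host=1714388551             # host-side epoch captured at the same moment
	drift=$(echo "$guest - $host" | bc)
	# if the absolute drift exceeds the tolerance, hard-set the guest clock:
	sudo date -s @"$host"       # sub-second precision is dropped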
	I0429 11:02:36.291751   13764 start.go:83] releasing machines lock for "functional-197400", held for 1m0.4252951s
	I0429 11:02:36.291751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:38.419682   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:41.001337   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:02:41.001536   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:41.013399   13764 ssh_runner.go:195] Run: cat /version.json
	I0429 11:02:41.013399   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.159330   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:45.835688   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.836385   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.836904   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.862776   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.935735   13764 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 11:02:45.936039   13764 ssh_runner.go:235] Completed: cat /version.json: (4.9226007s)
	I0429 11:02:45.950826   13764 ssh_runner.go:195] Run: systemctl --version
	I0429 11:02:46.011745   13764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 11:02:46.011850   13764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0103766s)
	I0429 11:02:46.011850   13764 command_runner.go:130] > systemd 252 (252)
	I0429 11:02:46.011999   13764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 11:02:46.026211   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:02:46.035440   13764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 11:02:46.035904   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:02:46.048490   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:02:46.067930   13764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
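Editorial note: the find invocation above (its mangled "%p" repaired) renames any bridge or podman CNI configs so kubelet will not pick them up; the .mk_disabled suffix keeps them recoverable. A cleaner equivalent of the same pass (sketch):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

Nothing matched in this run, hence the "nothing to disable" line that follows.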
	I0429 11:02:46.067930   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.068188   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:46.104796   13764 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 11:02:46.118218   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:02:46.152176   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:02:46.174564   13764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:02:46.187378   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:02:46.221768   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.255412   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:02:46.290318   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.325497   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:02:46.367045   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:02:46.403208   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:02:46.442281   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
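Editorial note: the sed passes above rewrite /etc/containerd/config.toml in place: the cgroupfs driver instead of systemd cgroups (SystemdCgroup = false), the runc.v2 shim in place of the v1 runtimes, pause:3.9 as the sandbox image, /etc/cni/net.d as the CNI conf_dir, and unprivileged ports enabled. A quick way to verify the result (sketch):

	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' \
	  /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true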
	I0429 11:02:46.478926   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:02:46.499867   13764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 11:02:46.513297   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
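Editorial note: the two kernel settings touched here, bridge-nf-call-iptables and ip_forward, are the standard prerequisites for Kubernetes pod networking. The log applies them transiently; a persistent equivalent would be (sketch; the drop-in file name is a conventional choice, not from this log):

	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system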
	I0429 11:02:46.549431   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:46.855826   13764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:02:46.905389   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.922503   13764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > [Unit]
	I0429 11:02:46.951373   13764 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 11:02:46.951373   13764 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 11:02:46.951373   13764 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 11:02:46.951470   13764 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitBurst=3
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 11:02:46.951470   13764 command_runner.go:130] > [Service]
	I0429 11:02:46.951507   13764 command_runner.go:130] > Type=notify
	I0429 11:02:46.951507   13764 command_runner.go:130] > Restart=on-failure
	I0429 11:02:46.951507   13764 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 11:02:46.951552   13764 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 11:02:46.951552   13764 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 11:02:46.951643   13764 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 11:02:46.951643   13764 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 11:02:46.951643   13764 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 11:02:46.951687   13764 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 11:02:46.951727   13764 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 11:02:46.951727   13764 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 11:02:46.951727   13764 command_runner.go:130] > ExecStart=
	I0429 11:02:46.951791   13764 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 11:02:46.951838   13764 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 11:02:46.951838   13764 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 11:02:46.951838   13764 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 11:02:46.951838   13764 command_runner.go:130] > LimitNOFILE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitNPROC=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitCORE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0429 11:02:46.951939   13764 command_runner.go:130] > TasksMax=infinity
	I0429 11:02:46.951939   13764 command_runner.go:130] > TimeoutStartSec=0
	I0429 11:02:46.951939   13764 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 11:02:46.951939   13764 command_runner.go:130] > Delegate=yes
	I0429 11:02:46.951939   13764 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 11:02:46.952000   13764 command_runner.go:130] > KillMode=process
	I0429 11:02:46.952000   13764 command_runner.go:130] > [Install]
	I0429 11:02:46.952000   13764 command_runner.go:130] > WantedBy=multi-user.target
	I0429 11:02:46.966498   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.010945   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:02:47.071693   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.111019   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:02:47.138047   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:47.173728   13764 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
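Editorial note: rewriting /etc/crictl.yaml a second time switches crictl from the containerd socket to cri-dockerd, matching the Docker runtime this cluster uses. Once docker and cri-dockerd are up, the endpoint can be exercised directly:

	sudo crictl info      # reads the runtime endpoint from /etc/crictl.yaml
	sudo crictl ps -a     # lists CRI containers via cri-dockerd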
	I0429 11:02:47.188143   13764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:02:47.196459   13764 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 11:02:47.211733   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:02:47.232274   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:02:47.282245   13764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:02:47.579073   13764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:02:47.847228   13764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:02:47.847310   13764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:02:47.911078   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:48.205114   13764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:03:59.569091   13764 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 11:03:59.569139   13764 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 11:03:59.569659   13764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3639778s)
	I0429 11:03:59.583436   13764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617057   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.617127   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617167   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.617214   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.617232   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.617382   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.617440   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.617474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617565   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618212   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618255   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0429 11:03:59.618824   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.618889   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.618988   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619010   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619060   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619155   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619952   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619975   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621634   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623248   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625385   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	I0429 11:03:59.655510   13764 out.go:177] 
	W0429 11:03:59.658137   13764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 11:03:59.659798   13764 out.go:239] * 
	W0429 11:03:59.661047   13764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:03:59.665567   13764 out.go:177] 
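	
	The start failure above is dockerd timing out while dialing the containerd socket it manages. A minimal stdlib sketch of that reachability check, under stated assumptions: the socket path is taken from the log line, and the 10s timeout is illustrative rather than dockerd's actual setting.
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        // Probe the socket dockerd failed to dial; on this host the dial
	        // would hang and time out, matching "context deadline exceeded".
	        conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 10*time.Second)
	        if err != nil {
	            fmt.Println("dial failed:", err)
	            return
	        }
	        defer conn.Close()
	        fmt.Println("containerd socket reachable")
	    }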
	
	
	==> Docker <==
	Apr 29 11:23:04 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:23:04 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:23:04 functional-197400 dockerd[9092]: time="2024-04-29T11:23:04.695569047Z" level=info msg="Starting up"
	Apr 29 11:24:04 functional-197400 dockerd[9092]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:24:04 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:24:04 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:24:04 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error getting RW layer size for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e'"
	Apr 29 11:24:04 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:24:04Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 29 11:24:04 functional-197400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 29 11:24:04 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:24:04 functional-197400 systemd[1]: Starting Docker Application Container Engine...
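	
	The %!F(MISSING) tokens (and the %!B(MISSING), %!l(MISSING) variants further down in the kubelet lines) are not part of the real URLs: a URL-encoded docker host ("%2F" for each "/", "%7B" for "{", and so on) was passed through a printf-style logger, which reads each escape as a width-plus-verb with no operand. A one-line Go reproduction:
	
	    package main
	
	    import "fmt"
	
	    func main() {
	        // "%2F" parses as format verb 'F' with width 2 and no argument:
	        fmt.Printf("http://%2Fvar%2Frun%2Fdocker.sock/v1.44/version\n")
	        // prints: http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version
	    }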
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:24:06Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
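	
	The harness command above is a two-stage fallback: list containers via crictl if present, otherwise (or on failure) via docker ps -a; here both stages fail because neither cri-dockerd nor dockerd is serving its socket. A rough Go equivalent of that shell fallback, for illustration only:
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    // listContainers mirrors the shell fallback: try crictl first and
	    // fall back to docker only when crictl is absent or errors out.
	    func listContainers() (string, error) {
	        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
	            return string(out), nil
	        }
	        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        return string(out), err
	    }
	
	    func main() {
	        out, err := listContainers()
	        if err != nil {
	            fmt.Println("both runtimes unreachable:", err)
	        }
	        fmt.Print(out)
	    }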
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.105646] kauditd_printk_skb: 59 callbacks suppressed
	[Apr29 11:00] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.209517] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.250233] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.826178] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.204930] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.214674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.298087] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.281397] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.104642] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.002494] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.574798] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.643589] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.110283] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.558246] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.179717] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.880707] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.211200] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.322426] kauditd_printk_skb: 88 callbacks suppressed
	[Apr29 11:01] kauditd_printk_skb: 10 callbacks suppressed
	[Apr29 11:02] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.708559] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.292965] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.339194] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +5.346272] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:25:05 up 26 min,  0 users,  load average: 0.08, 0.04, 0.04
	Linux functional-197400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 11:25:00 functional-197400 kubelet[2131]: E0429 11:25:00.653530    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.26.179.82:8441: connect: connection refused" event="&Event{ObjectMeta:{coredns-7db6d8ff4d-nqjrm.17cabb547a52a0fd  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7db6d8ff4d-nqjrm,UID:3d6217a2-a7b8-47bf-9338-975e230e7f2a,APIVersion:v1,ResourceVersion:370,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://10.244.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:functional-197400,},FirstTimestamp:2024-04-29 11:02:59.671777533 +0000 UTC m=+145.475448768,LastTimestamp:2024-04-29 11:02:59.671777533 +0000 UTC m=+145.475448768,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-197400,}"
	Apr 29 11:25:01 functional-197400 kubelet[2131]: E0429 11:25:01.134296    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 22m13.257396501s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.506470    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?resourceVersion=0&timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.507474    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.509186    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.510096    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.511683    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:03 functional-197400 kubelet[2131]: E0429 11:25:03.511772    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: I0429 11:25:04.428128    2131 status_manager.go:853] "Failed to get status for pod" podUID="3b208ed450e2701a29ea259268f7cae7" pod="kube-system/kube-apiserver-functional-197400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-197400\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.960141    2131 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961159    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961202    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961220    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961360    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961410    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961434    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961460    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961576    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961853    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.961698    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.962363    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: I0429 11:25:04.962611    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.963653    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.963761    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:25:04 functional-197400 kubelet[2131]: E0429 11:25:04.963961    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
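	
	Two separate kubelet health gates are firing together above: the runtime status check (RuntimeReady=false because dockerd is unreachable) and the PLEG relist age. A minimal sketch of the PLEG rule exactly as the message states it, assuming only the 3m0s threshold quoted in the log rather than kubelet's actual implementation:
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	    )
	
	    // relistThreshold matches the "threshold is 3m0s" quoted in the log.
	    const relistThreshold = 3 * time.Minute
	
	    // plegHealthy declares the runtime unhealthy once the last successful
	    // relist is older than the threshold.
	    func plegHealthy(lastRelist, now time.Time) (bool, string) {
	        if age := now.Sub(lastRelist); age > relistThreshold {
	            return false, fmt.Sprintf("pleg was last seen active %v ago; threshold is %v", age, relistThreshold)
	        }
	        return true, ""
	    }
	
	    func main() {
	        last := time.Now().Add(-(22*time.Minute + 13*time.Second)) // ~the age in the log
	        fmt.Println(plegHealthy(last, time.Now()))
	    }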
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:22:39.592182    7776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 11:23:04.493788    7776 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:23:04.532369    7776 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:23:04.563140    7776 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:23:04.593226    7776 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:24:04.719880    7776 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:24:04.754704    7776 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:24:04.781530    7776 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:24:04.812863    7776 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
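
The long hex directory name in the recurring "Unable to resolve the current Docker CLI context" warning is not random: the Docker CLI keys context metadata on disk by the SHA-256 digest of the context name, and the digest of "default" is exactly the 37a8eec1... value in the missing path. A quick Go check:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Matches the directory name in the warning:
        // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
        fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
    }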
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400: exit status 2 (11.731808s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:25:05.815860    1696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-197400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (180.45s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (241.27s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
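
This failure has the shape of an os.Link LinkError ("link src dst: ..."): the test hard-links the minikube binary to out\kubectl.exe, and a target left over from an earlier run trips Windows' "file already exists" error. A hedged sketch of an idempotent remove-then-link variant, as an illustration rather than minikube's actual fix:

    package main

    import (
        "fmt"
        "os"
    )

    // linkFresh clears any stale target before linking, so a leftover
    // out\kubectl.exe from a previous run cannot fail the step.
    func linkFresh(src, dst string) error {
        if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
            return err
        }
        return os.Link(src, dst)
    }

    func main() {
        if err := linkFresh("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
            fmt.Println("link:", err)
        }
    }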
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-197400 -n functional-197400: exit status 2 (11.5427725s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:25:17.538143    8276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs -n 25
E0429 11:26:27.442089    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs -n 25: (3m37.3128464s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:55 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:55 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:56 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-205500 --log_dir                                     | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-205500                                            | nospam-205500     | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 10:57 UTC |
	| start   | -p functional-197400                                        | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:57 UTC | 29 Apr 24 11:01 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-197400                                        | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:01 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:09 UTC | 29 Apr 24 11:11 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:11 UTC | 29 Apr 24 11:13 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:13 UTC | 29 Apr 24 11:15 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache add                                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:15 UTC | 29 Apr 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-197400                 |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache delete                              | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-197400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC | 29 Apr 24 11:16 UTC |
	| ssh     | functional-197400 ssh sudo                                  | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-197400                                           | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:16 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-197400 ssh                                       | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:17 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-197400 cache reload                              | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:17 UTC | 29 Apr 24 11:19 UTC |
	| ssh     | functional-197400 ssh                                       | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC | 29 Apr 24 11:19 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:19 UTC | 29 Apr 24 11:19 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-197400 kubectl --                                | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:22 UTC |                     |
	|         | --context functional-197400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:01:30.445059   13764 out.go:291] Setting OutFile to fd 884 ...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.445789   13764 out.go:304] Setting ErrFile to fd 280...
	I0429 11:01:30.445789   13764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:01:30.469783   13764 out.go:298] Setting JSON to false
	I0429 11:01:30.474075   13764 start.go:129] hostinfo: {"hostname":"minikube6","uptime":29963,"bootTime":1714358527,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:01:30.474075   13764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:01:30.478082   13764 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:01:30.484053   13764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:01:30.482999   13764 notify.go:220] Checking for updates...
	I0429 11:01:30.487059   13764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:01:30.489426   13764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:01:30.492314   13764 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:01:30.494672   13764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:01:30.497561   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:01:30.498504   13764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:01:35.797581   13764 out.go:177] * Using the hyperv driver based on existing profile
	I0429 11:01:35.800821   13764 start.go:297] selected driver: hyperv
	I0429 11:01:35.800821   13764 start.go:901] validating driver "hyperv" against &{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.800821   13764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:01:35.854447   13764 cni.go:84] Creating CNI manager for ""
	I0429 11:01:35.854447   13764 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 11:01:35.855168   13764 start.go:340] cluster config:
	{Name:functional-197400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-197400 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.82 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:01:35.855712   13764 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:01:35.860024   13764 out.go:177] * Starting "functional-197400" primary control-plane node in "functional-197400" cluster
	I0429 11:01:35.862486   13764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:01:35.862966   13764 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:01:35.862966   13764 cache.go:56] Caching tarball of preloaded images
	I0429 11:01:35.863088   13764 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:01:35.863509   13764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:01:35.863697   13764 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\config.json ...
	I0429 11:01:35.865973   13764 start.go:360] acquireMachinesLock for functional-197400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:01:35.865973   13764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-197400"
	I0429 11:01:35.865973   13764 start.go:96] Skipping create...Using existing machine configuration
	I0429 11:01:35.866728   13764 fix.go:54] fixHost starting: 
	I0429 11:01:35.866814   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:38.565164   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:38.566072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:38.566072   13764 fix.go:112] recreateIfNeeded on functional-197400: state=Running err=<nil>
	W0429 11:01:38.566163   13764 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 11:01:38.570099   13764 out.go:177] * Updating the running hyperv "functional-197400" VM ...
	I0429 11:01:38.572589   13764 machine.go:94] provisionDockerMachine start ...
	I0429 11:01:38.572790   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:40.728211   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:40.729260   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:43.337044   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:43.338056   13764 main.go:141] libmachine: [stderr =====>] : 
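	The two PowerShell invocations above (read the VM state, then read the first IP of the first network adapter) repeat throughout this log before every SSH step. A minimal sketch of that query pattern follows, assuming it is reasonable to shell out with os/exec; this is an illustrative reconstruction, not the libmachine Hyper-V driver source.

	package hyperv

	import (
		"os/exec"
		"strings"
	)

	// query runs one PowerShell expression the way the log shows:
	// -NoProfile -NonInteractive, returning trimmed stdout.
	func query(expr string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", expr,
		).Output()
		return strings.TrimSpace(string(out)), err
	}

	// vmStateAndIP mirrors the paired queries seen above for a named VM.
	func vmStateAndIP(name string) (state, ip string, err error) {
		if state, err = query(`( Hyper-V\Get-VM ` + name + ` ).state`); err != nil {
			return
		}
		ip, err = query(`(( Hyper-V\Get-VM ` + name + ` ).networkadapters[0]).ipaddresses[0]`)
		return
	}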
	I0429 11:01:43.344719   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:43.344884   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:43.344884   13764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:01:43.492864   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:43.493032   13764 buildroot.go:166] provisioning hostname "functional-197400"
	I0429 11:01:43.493146   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:45.594418   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:45.595027   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:48.145598   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:48.153963   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:48.154713   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:48.154713   13764 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-197400 && echo "functional-197400" | sudo tee /etc/hostname
	I0429 11:01:48.322635   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-197400
	
	I0429 11:01:48.322635   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:50.425088   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:50.426116   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:52.996130   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:53.002862   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:01:53.003355   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:01:53.003457   13764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-197400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-197400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-197400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:01:53.146326   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:01:53.146326   13764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:01:53.146326   13764 buildroot.go:174] setting up certificates
	I0429 11:01:53.146326   13764 provision.go:84] configureAuth start
	I0429 11:01:53.146326   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:55.241860   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:01:57.763195   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:01:57.763363   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:57.763439   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:01:59.852676   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:01:59.853320   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:02.368053   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:02.368674   13764 provision.go:143] copyHostCerts
	I0429 11:02:02.369074   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:02:02.369383   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:02:02.369383   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:02:02.369931   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:02:02.370685   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:02:02.370685   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:02:02.370685   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:02:02.371650   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:02:02.372440   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:02:02.372519   13764 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:02:02.372519   13764 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:02:02.373046   13764 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:02:02.374016   13764 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-197400 san=[127.0.0.1 172.26.179.82 functional-197400 localhost minikube]
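	The server certificate generated above carries a SAN list that mixes IP addresses (127.0.0.1, 172.26.179.82) and DNS names (functional-197400, localhost, minikube). A minimal sketch of issuing such a certificate with crypto/x509 follows, assuming an already-loaded CA pair; it is not minikube's provision code, and the 26280h lifetime is copied from the CertExpiration value in the cluster config above.

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// NewServerCert signs a server certificate for the given SANs with the CA.
	// It returns DER bytes; PEM-encode them before writing server.pem.
	func NewServerCert(sans []string, caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-197400"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Each SAN is either an IP literal or a DNS name, as in the log line above.
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}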
	I0429 11:02:02.495876   13764 provision.go:177] copyRemoteCerts
	I0429 11:02:02.510020   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:02:02.510020   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:04.618809   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:04.619542   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:07.167725   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:07.168803   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:07.282611   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7725535s)
	I0429 11:02:07.282611   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:02:07.282611   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 11:02:07.334346   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:02:07.334955   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:02:07.390221   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:02:07.391689   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:02:07.447983   13764 provision.go:87] duration metric: took 14.3015428s to configureAuth
	I0429 11:02:07.448063   13764 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:02:07.448063   13764 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:02:07.448747   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:09.549776   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:09.550299   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:12.117228   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:12.123983   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:12.124562   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:12.124562   13764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:02:12.266791   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:02:12.267014   13764 buildroot.go:70] root file system type: tmpfs
	I0429 11:02:12.267189   13764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:02:12.267262   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:14.408118   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:14.408560   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:16.960938   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:16.961202   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:16.967669   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:16.968259   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:16.968427   13764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:02:17.143647   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:02:17.143855   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:19.243390   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:21.747577   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:21.755006   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:21.755589   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:21.755589   13764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:02:21.897946   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:02:21.897946   13764 machine.go:97] duration metric: took 43.3250104s to provisionDockerMachine
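	The docker.service update just completed uses a compare-and-swap idiom: the new unit is written to docker.service.new, and because diff exits 0 when the files are identical, the mv/daemon-reload/restart branch after || only runs when the content actually changed. A small sketch of that pattern follows; runSSH is a hypothetical stand-in for minikube's ssh_runner.

	package provision

	import "fmt"

	// updateUnit swaps in <unit>.new and restarts docker only when the
	// unit content differs, mirroring the SSH command shown above.
	func updateUnit(runSSH func(cmd string) error, unit string) error {
		cmd := fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || "+
				"{ sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && "+
				"sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }",
			unit)
		// diff exit 0 (identical) short-circuits the ||, so a no-op update
		// never triggers a docker restart.
		return runSSH(cmd)
	}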
	I0429 11:02:21.897946   13764 start.go:293] postStartSetup for "functional-197400" (driver="hyperv")
	I0429 11:02:21.897946   13764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:02:21.911428   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:02:21.911428   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:23.981145   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:26.501393   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:26.502118   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:26.619226   13764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7077235s)
	I0429 11:02:26.634064   13764 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:02:26.641916   13764 command_runner.go:130] > NAME=Buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 11:02:26.641916   13764 command_runner.go:130] > ID=buildroot
	I0429 11:02:26.641916   13764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 11:02:26.641916   13764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 11:02:26.641916   13764 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:02:26.641916   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:02:26.642478   13764 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:02:26.643334   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:02:26.643334   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:02:26.644676   13764 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> hosts in /etc/test/nested/copy/8496
	I0429 11:02:26.644676   13764 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts -> /etc/test/nested/copy/8496/hosts
	I0429 11:02:26.657704   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8496
	I0429 11:02:26.682055   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:02:26.741547   13764 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts --> /etc/test/nested/copy/8496/hosts (40 bytes)
	I0429 11:02:26.792541   13764 start.go:296] duration metric: took 4.8945563s for postStartSetup
	I0429 11:02:26.792541   13764 fix.go:56] duration metric: took 50.9254062s for fixHost
	I0429 11:02:26.792541   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:28.853567   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:31.379441   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:31.385529   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:31.385992   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:31.385992   13764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714388551.520714576
	
	I0429 11:02:31.514751   13764 fix.go:216] guest clock: 1714388551.520714576
	I0429 11:02:31.514751   13764 fix.go:229] Guest: 2024-04-29 11:02:31.520714576 +0000 UTC Remote: 2024-04-29 11:02:26.7925417 +0000 UTC m=+56.526311901 (delta=4.728172876s)
	I0429 11:02:31.514751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:33.581114   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:33.581995   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:36.123230   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:36.130279   13764 main.go:141] libmachine: Using SSH client type: native
	I0429 11:02:36.131025   13764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.82 22 <nil> <nil>}
	I0429 11:02:36.131025   13764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714388551
	I0429 11:02:36.291751   13764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:02:31 UTC 2024
	
	I0429 11:02:36.291751   13764 fix.go:236] clock set: Mon Apr 29 11:02:31 UTC 2024
	 (err=<nil>)
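	The clock-fix exchange above reads the guest clock over SSH (1714388551.520714576), computes a 4.728s delta against the host, and then runs "sudo date -s @1714388551" to correct it. A rough sketch of that flow, under stated assumptions: runSSH is a hypothetical runner, the 2s threshold is assumed rather than minikube's exact policy, and the log does not show exactly which timestamp minikube targets, so the sketch syncs to the host clock.

	package provision

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// fixGuestClock compares the guest clock with the host and resets the
	// guest via "date -s @<unix>" when the drift exceeds a threshold.
	func fixGuestClock(runSSH func(cmd string) (string, error)) error {
		out, err := runSSH(`date +%s.%N`) // guest clock, e.g. 1714388551.520714576
		if err != nil {
			return err
		}
		secs := strings.SplitN(strings.TrimSpace(out), ".", 2)[0]
		guest, err := strconv.ParseInt(secs, 10, 64)
		if err != nil {
			return err
		}
		delta := guest - time.Now().Unix()
		if delta > 2 || delta < -2 { // assumed threshold, not minikube's exact value
			_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
		}
		return err
	}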
	I0429 11:02:36.291751   13764 start.go:83] releasing machines lock for "functional-197400", held for 1m0.4252951s
	I0429 11:02:36.291751   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:38.419288   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:38.419682   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:40.996072   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:41.001337   13764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:02:41.001536   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:41.013399   13764 ssh_runner.go:195] Run: cat /version.json
	I0429 11:02:41.013399   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.146300   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:02:43.158321   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:43.159330   13764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
	I0429 11:02:45.835688   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.836385   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.836904   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stdout =====>] : 172.26.179.82
	
	I0429 11:02:45.861347   13764 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:02:45.862776   13764 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
	I0429 11:02:45.935735   13764 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 11:02:45.936039   13764 ssh_runner.go:235] Completed: cat /version.json: (4.9226007s)
	I0429 11:02:45.950826   13764 ssh_runner.go:195] Run: systemctl --version
	I0429 11:02:46.011745   13764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 11:02:46.011850   13764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0103766s)
	I0429 11:02:46.011850   13764 command_runner.go:130] > systemd 252 (252)
	I0429 11:02:46.011999   13764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 11:02:46.026211   13764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:02:46.035440   13764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 11:02:46.035904   13764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:02:46.048490   13764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:02:46.067930   13764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 11:02:46.067930   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.068188   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:46.104796   13764 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 11:02:46.118218   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:02:46.152176   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:02:46.174564   13764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:02:46.187378   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:02:46.221768   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.255412   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:02:46.290318   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:02:46.325497   13764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:02:46.367045   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:02:46.403208   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:02:46.442281   13764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:02:46.478926   13764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:02:46.499867   13764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 11:02:46.513297   13764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:02:46.549431   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:46.855826   13764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:02:46.905389   13764 start.go:494] detecting cgroup driver to use...
	I0429 11:02:46.922503   13764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 11:02:46.951373   13764 command_runner.go:130] > [Unit]
	I0429 11:02:46.951373   13764 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 11:02:46.951373   13764 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 11:02:46.951373   13764 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 11:02:46.951470   13764 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitBurst=3
	I0429 11:02:46.951470   13764 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 11:02:46.951470   13764 command_runner.go:130] > [Service]
	I0429 11:02:46.951507   13764 command_runner.go:130] > Type=notify
	I0429 11:02:46.951507   13764 command_runner.go:130] > Restart=on-failure
	I0429 11:02:46.951507   13764 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 11:02:46.951552   13764 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 11:02:46.951552   13764 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 11:02:46.951643   13764 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 11:02:46.951643   13764 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 11:02:46.951643   13764 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 11:02:46.951687   13764 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 11:02:46.951727   13764 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 11:02:46.951727   13764 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 11:02:46.951727   13764 command_runner.go:130] > ExecStart=
	I0429 11:02:46.951791   13764 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 11:02:46.951838   13764 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 11:02:46.951838   13764 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 11:02:46.951838   13764 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 11:02:46.951838   13764 command_runner.go:130] > LimitNOFILE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitNPROC=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > LimitCORE=infinity
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 11:02:46.951896   13764 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 11:02:46.951939   13764 command_runner.go:130] > TasksMax=infinity
	I0429 11:02:46.951939   13764 command_runner.go:130] > TimeoutStartSec=0
	I0429 11:02:46.951939   13764 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 11:02:46.951939   13764 command_runner.go:130] > Delegate=yes
	I0429 11:02:46.951939   13764 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 11:02:46.952000   13764 command_runner.go:130] > KillMode=process
	I0429 11:02:46.952000   13764 command_runner.go:130] > [Install]
	I0429 11:02:46.952000   13764 command_runner.go:130] > WantedBy=multi-user.target
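The unit dump above shows the drop-in pattern the rest of this sequence depends on: the bare ExecStart= clears the command inherited from the base unit, and the second ExecStart= supplies the full dockerd invocation (TLS on tcp://0.0.0.0:2376, the provider=hyperv label, and 10.96.0.0/12 marked as an insecure registry for the service CIDR). Note that no cgroup-driver flag appears on that command line, so the driver has to come from /etc/docker/daemon.json, which is exactly what the step at 11:02:47 below rewrites. The same clearing pattern in a minimal drop-in (a sketch, not a file minikube ships):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock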
	I0429 11:02:46.966498   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.010945   13764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:02:47.071693   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:02:47.111019   13764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:02:47.138047   13764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:02:47.173728   13764 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
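Writing /etc/crictl.yaml points the crictl client at the cri-dockerd socket, so CRI-level inspection later in the run talks to Docker through cri-dockerd rather than to containerd directly. With that file in place, a typical spot check on the guest would be (assuming crictl is installed, analogous to the cri-dockerd lookup in the next step):

    sudo crictl ps -a
    # equivalent, without relying on /etc/crictl.yaml:
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a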
	I0429 11:02:47.188143   13764 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:02:47.196459   13764 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 11:02:47.211733   13764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:02:47.232274   13764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
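The 189-byte payload copied here is a systemd drop-in for cri-docker.service. Its contents are not echoed in the log; following the same ExecStart-clearing convention as the docker.service unit above, its assumed shape would be roughly (a hypothetical reconstruction, exact flags may differ):

    # /etc/systemd/system/cri-docker.service.d/10-cni.conf (assumed contents)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni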
	I0429 11:02:47.282245   13764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:02:47.579073   13764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:02:47.847228   13764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:02:47.847310   13764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
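Per the line above, minikube pins Docker to the cgroupfs cgroup driver by writing a small /etc/docker/daemon.json (130 bytes in this run). The file itself is not echoed in the log; a daemon.json selecting that driver would look roughly like this (an assumed sketch, not the exact payload):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }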
	I0429 11:02:47.911078   13764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:02:48.205114   13764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:03:59.569091   13764 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0429 11:03:59.569139   13764 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0429 11:03:59.569659   13764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3639778s)
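This is the pivotal failure of the sequence: after daemon.json is rewritten, the docker.service restart blocks for 71 seconds and its control process exits non-zero, so dockerd never comes back up. The error text already names the standard triage commands; on the guest they would be:

    systemctl status docker.service     # last exit code and active state
    journalctl -xeu docker.service      # unit journal with explanatory catalog entries
    sudo dockerd --validate             # on dockerd 23.0+ (26.0.2 here), syntax-checks daemon.json

The harness performs the journalctl step itself immediately below.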
	I0429 11:03:59.583436   13764 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.616474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617057   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.617127   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.617167   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.617214   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.617232   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.617269   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.617329   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.617382   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.617440   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.617474   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617531   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.617565   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.617624   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618212   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618255   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.618281   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	I0429 11:03:59.618824   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.618889   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.618937   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.618988   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619010   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619060   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619078   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619155   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.619182   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619705   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619788   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.619877   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619952   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.619975   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.620000   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.620580   13764 command_runner.go:130] > Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.620671   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0429 11:03:59.621251   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	I0429 11:03:59.621402   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0429 11:03:59.621486   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621560   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0429 11:03:59.621634   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621660   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.621799   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0429 11:03:59.622343   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0429 11:03:59.622554   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	I0429 11:03:59.622656   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0429 11:03:59.622736   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	I0429 11:03:59.622835   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622913   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.622993   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623073   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623152   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623248   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623345   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.623932   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624088   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624173   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624251   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624328   13764 command_runner.go:130] > Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624410   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0429 11:03:59.624491   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624699   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624814   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	I0429 11:03:59.624901   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	I0429 11:03:59.624988   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625069   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625207   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625290   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625385   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	I0429 11:03:59.625464   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625557   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625678   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.625714   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	I0429 11:03:59.626319   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	I0429 11:03:59.626485   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	I0429 11:03:59.626568   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0429 11:03:59.626648   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0429 11:03:59.626712   13764 command_runner.go:130] > Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
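	The dial failure above is the proximate cause of the RUNTIME_ENABLE exit that follows: the restarted dockerd (pid 4230, started 11:02:59) dials the system socket /run/containerd/containerd.sock and gives up exactly 60s later, whereas the earlier boots logged "containerd not running, starting managed containerd" and served on /var/run/docker/containerd/containerd.sock. A minimal diagnostic sketch for confirming this from inside the guest, assuming SSH access to the VM and that ctr ships in the guest image (the socket path and unit names are taken from the log above; nothing here is specific tooling beyond minikube ssh):
	
	  # open a shell in the Hyper-V guest for this profile
	  minikube -p functional-197400 ssh
	  # does the system containerd socket exist, and does containerd answer on it?
	  ls -l /run/containerd/containerd.sock
	  sudo ctr --address /run/containerd/containerd.sock version
	  # unit state for both daemons, plus the tail of the docker journal
	  sudo systemctl status docker containerd --no-pager
	  sudo journalctl -u docker --no-pager | tail -n 50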
	I0429 11:03:59.655510   13764 out.go:177] 
	W0429 11:03:59.658137   13764 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 10:59:29 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.267173170Z" level=info msg="Starting up"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.268201295Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 10:59:29 functional-197400 dockerd[669]: time="2024-04-29T10:59:29.269372823Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=675
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.307954249Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337171950Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337254152Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337340754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337376555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337555459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337709163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.337903268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338009670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338032671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338045671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338138773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.338687786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341822662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.341930064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342068768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342160270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342291773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342561779Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.342706583Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372846706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.372975409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373003310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373021510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373037211Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373149113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373464921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373719527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373825230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373848630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373863930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373890031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373906532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373921332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373949133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373962633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373975833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.373987533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374008834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374023234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374037835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374051935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374065235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374078736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374091236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374105436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374119237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374134237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374146237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374159238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374171938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374188938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374210239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374222939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374234739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374289741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374332042Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374348242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374360142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374503946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374551147Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374567747Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374816253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.374962657Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375258464Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 10:59:29 functional-197400 dockerd[675]: time="2024-04-29T10:59:29.375340566Z" level=info msg="containerd successfully booted in 0.070853s"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.341207280Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.372935594Z" level=info msg="Loading containers: start."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.662471377Z" level=info msg="Loading containers: done."
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686025529Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.686394438Z" level=info msg="Daemon has completed initialization"
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807251972Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 10:59:30 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 10:59:30 functional-197400 dockerd[669]: time="2024-04-29T10:59:30.807726683Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.294970724Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.296140626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:01 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.297893627Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298007127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:01 functional-197400 dockerd[669]: time="2024-04-29T11:00:01.298131828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:02 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:02 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:02 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.373783050Z" level=info msg="Starting up"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.375739052Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:02 functional-197400 dockerd[1026]: time="2024-04-29T11:00:02.376681653Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1032
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.414401489Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443879217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.443976817Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444032617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444054617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444082717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444097417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444314317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444420417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444442717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444454017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444480517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.444729817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448106421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448213221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448460321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448545421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448576221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448595621Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448608321Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.448970822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449301222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449419922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449439222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449472722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449525422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449797522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.449993923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450015223Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450031323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450046523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450061223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450074823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450089123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450104623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450119123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450132723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450147523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450169123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450195823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450213523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450228423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450242323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450317723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450340823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450355723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450370223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450386623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450404923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450419423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450433523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450450223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450473323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450488823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450586723Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450768623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450878823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450899223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.450913423Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451074824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451245924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451269524Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451551924Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451703024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.451799224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:02 functional-197400 dockerd[1032]: time="2024-04-29T11:00:02.452213625Z" level=info msg="containerd successfully booted in 0.040825s"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.418862644Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.441473165Z" level=info msg="Loading containers: start."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.627479942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.718102328Z" level=info msg="Loading containers: done."
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743113952Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.743178652Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.793711400Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:03 functional-197400 dockerd[1026]: time="2024-04-29T11:00:03.794898201Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:03 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:13 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.128331474Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134282479Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134684380Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.134803580Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:00:13 functional-197400 dockerd[1026]: time="2024-04-29T11:00:13.135077080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:00:14 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:00:14 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:00:14 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.213787206Z" level=info msg="Starting up"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.215786608Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 11:00:14 functional-197400 dockerd[1330]: time="2024-04-29T11:00:14.223733215Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.257297947Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285515974Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285568774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285610374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285627974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285654974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285669474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285807174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285907174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285969774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.285984174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286011074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.286128374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289099977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289240777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289384778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289474878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289505078Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289523778Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289538678Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289665278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289753578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289782578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289798778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289812978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.289861878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290650379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.290847279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291305579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291331579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291347879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291388179Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291418680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291448580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291464880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291477580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291490180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291506680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291528980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291545880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291563580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291578680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291590680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291602780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291614280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291626880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291639680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291658480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291677280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291691980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291721380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291739980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291796480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291812480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291829380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291878580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291897480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.291974180Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292217280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292341480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.292357280Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293132581Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293277181Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293335781Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 11:00:14 functional-197400 dockerd[1336]: time="2024-04-29T11:00:14.293541382Z" level=info msg="containerd successfully booted in 0.037246s"
	Apr 29 11:00:15 functional-197400 dockerd[1330]: time="2024-04-29T11:00:15.277854617Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 11:00:17 functional-197400 dockerd[1330]: time="2024-04-29T11:00:17.927543836Z" level=info msg="Loading containers: start."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.112045312Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.198094793Z" level=info msg="Loading containers: done."
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222645217Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.222779217Z" level=info msg="Daemon has completed initialization"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274280866Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 11:00:18 functional-197400 dockerd[1330]: time="2024-04-29T11:00:18.274456266Z" level=info msg="API listen on [::]:2376"
	Apr 29 11:00:18 functional-197400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120296911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120512729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120543432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.120660941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.186893035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187185759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.187211261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.188407762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215270831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215407743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215422644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.215523352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280764062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.280985280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281084889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.281634035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643303177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643466691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643509895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.643684609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.697670368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.707267679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708026943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.708256862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784290483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784407793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784468198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.784707718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.819747877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821078290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.821252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:27 functional-197400 dockerd[1336]: time="2024-04-29T11:00:27.826495047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985252797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985562604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985588805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:48 functional-197400 dockerd[1336]: time="2024-04-29T11:00:48.985711908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068054169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068309474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068331475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.068467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166236144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166301345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166313646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.166396847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.521616981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522347196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.522579101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.523240714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895048895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895152197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895172797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.895676508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984381216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984458818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984485818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:49 functional-197400 dockerd[1336]: time="2024-04-29T11:00:49.984841526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.507103229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509692523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.509830323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.510118922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.796842343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797484742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797645142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:00:56 functional-197400 dockerd[1336]: time="2024-04-29T11:00:56.797880641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.234836529Z" level=info msg="ignoring event" container=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235676628Z" level=info msg="shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235735628Z" level=warning msg="cleaning up after shim disconnected" id=142f76bd046abe5fe7ed2c55109d27fd70374656d49d2f0889682c77e5a6fc8e namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.235745428Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1330]: time="2024-04-29T11:01:00.451291296Z" level=info msg="ignoring event" container=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451669095Z" level=info msg="shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451851495Z" level=warning msg="cleaning up after shim disconnected" id=381e1f6e840710b8b85b15a2ad83c4c27b34ecdd63a4fcbfa964f817785462f8 namespace=moby
	Apr 29 11:01:00 functional-197400 dockerd[1336]: time="2024-04-29T11:01:00.451995494Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.234860092Z" level=info msg="Processing signal 'terminated'"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450791635Z" level=info msg="shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.451090435Z" level=info msg="ignoring event" container=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.450876935Z" level=warning msg="cleaning up after shim disconnected" id=dba70f00edea67d13a79f318714d21a61b5969390ba153b307de2154938d9378 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.451747135Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.482934541Z" level=info msg="ignoring event" container=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.484895642Z" level=info msg="shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.485295742Z" level=info msg="ignoring event" container=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486342742Z" level=warning msg="cleaning up after shim disconnected" id=1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486585842Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486559842Z" level=info msg="shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486853242Z" level=warning msg="cleaning up after shim disconnected" id=4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.486923642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.494120344Z" level=info msg="ignoring event" container=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494771444Z" level=info msg="shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494880444Z" level=warning msg="cleaning up after shim disconnected" id=b673c2f6b46c936c1d76a1dc40302f46805a1b0c2f5a67214f8acf869abb7e0a namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.494940744Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.507132346Z" level=info msg="ignoring event" container=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509010947Z" level=info msg="shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509090647Z" level=warning msg="cleaning up after shim disconnected" id=4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.509108047Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531751851Z" level=info msg="ignoring event" container=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.531875751Z" level=info msg="ignoring event" container=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532003151Z" level=info msg="shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532109051Z" level=warning msg="cleaning up after shim disconnected" id=3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.532144051Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546567054Z" level=info msg="shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546687154Z" level=warning msg="cleaning up after shim disconnected" id=d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.546700554Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.551454855Z" level=info msg="ignoring event" container=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.552199755Z" level=info msg="shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.553996555Z" level=warning msg="cleaning up after shim disconnected" id=6c0b7e84d526f8ad4a0f6ebd2bc4396be2653094f7bb2c9aef76d99c2af1b7f7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.554987256Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567471558Z" level=info msg="ignoring event" container=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567533658Z" level=info msg="ignoring event" container=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.567572058Z" level=info msg="ignoring event" container=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585709762Z" level=info msg="shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585772862Z" level=warning msg="cleaning up after shim disconnected" id=59111f7928cddc1dfcced642289f6aa02dcbd05c3a1eedb0744ad488365ebd2e namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.585785062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586016062Z" level=info msg="shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586066762Z" level=warning msg="cleaning up after shim disconnected" id=35e3eb80943745fe2cf201cb84b14c5f0918eefb1694e7b7ff9aa4e08e92ba82 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.586078062Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592597763Z" level=info msg="shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592801863Z" level=warning msg="cleaning up after shim disconnected" id=d810f6d2ad260fb2f849336559f3fd2ee7d913748adbca29514904ef4af9b772 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.592926563Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1330]: time="2024-04-29T11:02:48.596528564Z" level=info msg="ignoring event" container=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.596987364Z" level=info msg="shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597025164Z" level=warning msg="cleaning up after shim disconnected" id=bb62ae64971f9adef77c2f2f90fde40c06cbbca9eebce5bc05dc6c6e6b6e49a7 namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.597035064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:48 functional-197400 dockerd[1336]: time="2024-04-29T11:02:48.780696301Z" level=warning msg="cleanup warnings time=\"2024-04-29T11:02:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.366929116Z" level=info msg="shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.368217817Z" level=warning msg="cleaning up after shim disconnected" id=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1336]: time="2024-04-29T11:02:53.369588017Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:53 functional-197400 dockerd[1330]: time="2024-04-29T11:02:53.370462217Z" level=info msg="ignoring event" container=484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.334510807Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.391107616Z" level=info msg="ignoring event" container=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393713479Z" level=info msg="shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393802388Z" level=warning msg="cleaning up after shim disconnected" id=02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006 namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1336]: time="2024-04-29T11:02:58.393813489Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463540623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463722041Z" level=info msg="Daemon shutdown complete"
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.463974967Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 11:02:58 functional-197400 dockerd[1330]: time="2024-04-29T11:02:58.464010370Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 11:02:59 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:02:59 functional-197400 systemd[1]: docker.service: Consumed 6.178s CPU time.
	Apr 29 11:02:59 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:02:59 functional-197400 dockerd[4230]: time="2024-04-29T11:02:59.547648892Z" level=info msg="Starting up"
	Apr 29 11:03:59 functional-197400 dockerd[4230]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:03:59 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:03:59 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 11:03:59.659798   13764 out.go:239] * 
	W0429 11:03:59.661047   13764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:03:59.665567   13764 out.go:177] 
	
	
	==> Docker <==
	Apr 29 11:27:05 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:27:05 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:27:05 functional-197400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 29 11:27:05 functional-197400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 11:27:05 functional-197400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 11:27:05 functional-197400 dockerd[10068]: time="2024-04-29T11:27:05.711022261Z" level=info msg="Starting up"
	Apr 29 11:28:05 functional-197400 dockerd[10068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4a5511a85e18011a9fb4ea10ab2c13bd431daa9457b4c66d7e094dbb76e4fa5b'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '484ed283c6610bf5c36dde1ca164dda0b78a728f7dffcce8da38b51fb8a9502e'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3798b873721d7dc40e288667d9b9b4a901c99ccb4dd8d99905421849663864cf'"
	Apr 29 11:28:05 functional-197400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 11:28:05 functional-197400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 11:28:05 functional-197400 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02a5b3b1c21b539abc72be7fb42404c419588f627c22ae0cacbb0669c3d58006'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4d36cc1a6f685e1d811f6bad5051e2bac5aeff0d018e916dd79e221194b44523'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1e4b44ae8d1bfb987edd4f294446db4b0229fbd7d8bb98561f04d956ff6a6edb'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error getting RW layer size for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/d25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd25f06609cac2e580b526ef4aac479e374596827890c02720c3fcd1a50ef818e'"
	Apr 29 11:28:05 functional-197400 cri-dockerd[1235]: time="2024-04-29T11:28:05Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:28:07Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.105646] kauditd_printk_skb: 59 callbacks suppressed
	[Apr29 11:00] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.209517] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.250233] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +2.826178] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.204930] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.214674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.298087] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +8.281397] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.104642] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.002494] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.574798] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +6.643589] systemd-fstab-generator[1722]: Ignoring "noauto" option for root device
	[  +0.110283] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.558246] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.179717] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.880707] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.211200] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.322426] kauditd_printk_skb: 88 callbacks suppressed
	[Apr29 11:01] kauditd_printk_skb: 10 callbacks suppressed
	[Apr29 11:02] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.708559] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.292965] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.339194] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +5.346272] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:29:06 up 30 min,  0 users,  load average: 0.00, 0.01, 0.01
	Linux functional-197400 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 11:28:58 functional-197400 kubelet[2131]: E0429 11:28:58.982596    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:28:58 functional-197400 kubelet[2131]: E0429 11:28:58.983844    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:28:58 functional-197400 kubelet[2131]: E0429 11:28:58.985451    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:28:58 functional-197400 kubelet[2131]: E0429 11:28:58.986669    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-197400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:28:58 functional-197400 kubelet[2131]: E0429 11:28:58.986710    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 29 11:29:01 functional-197400 kubelet[2131]: E0429 11:29:01.179481    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m13.302581962s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 29 11:29:04 functional-197400 kubelet[2131]: I0429 11:29:04.428504    2131 status_manager.go:853] "Failed to get status for pod" podUID="3b208ed450e2701a29ea259268f7cae7" pod="kube-system/kube-apiserver-functional-197400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-197400\": dial tcp 172.26.179.82:8441: connect: connection refused"
	Apr 29 11:29:04 functional-197400 kubelet[2131]: E0429 11:29:04.837521    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-197400.17cabb542cf56d81\": dial tcp 172.26.179.82:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-197400.17cabb542cf56d81  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-197400,UID:3b208ed450e2701a29ea259268f7cae7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.26.179.82:8441/readyz\": dial tcp 172.26.179.82:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-197400,},FirstTimestamp:2024-04-29 11:02:58.373823873 +0000 UTC m=+144.177495008,LastTimestamp:2024-04-29 11:03:00.673090774 +0000 UTC m=+146.476762009,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-197400,}"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.549207    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-197400?timeout=10s\": dial tcp 172.26.179.82:8441: connect: connection refused" interval="7s"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.972741    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.972875    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.972976    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.981987    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982373    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982554    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982668    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982732    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982754    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982796    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.982853    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: I0429 11:29:05.982867    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.983939    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.984049    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 29 11:29:05 functional-197400 kubelet[2131]: E0429 11:29:05.984419    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 29 11:29:06 functional-197400 kubelet[2131]: E0429 11:29:06.179753    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m18.302853482s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:25:29.088141    3424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 11:26:05.208707    3424 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:26:05.242343    3424 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:26:05.281943    3424 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:26:05.311863    3424 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:26:05.341944    3424 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:27:05.618584    3424 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.45/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-controller-manager%22%3Atrue%7D%7D": read unix @->/run/docker.sock: read: connection reset by peer
	E0429 11:28:05.735286    3424 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0429 11:28:05.766961    3424 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-197400 -n functional-197400: exit status 2 (11.9387243s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:29:06.871526    9192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-197400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (241.27s)
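Note: the failure chain for this test is visible in the logs above. dockerd never comes back up because every restart ends with 'failed to dial "/run/containerd/containerd.sock": context deadline exceeded' (systemd's restart counter reaches 24), which leaves the kubelet's PLEG unhealthy and the apiserver on 172.26.179.82:8441 refusing connections. Below is a minimal Go sketch of the same dial-with-deadline probe, to be run inside the guest; it assumes only the socket path reported in the journal, and the file name and messages are illustrative, not minikube code.

    // probe_containerd.go - minimal sketch: repeat dockerd's dial of the
    // containerd socket with a deadline, as seen in the failure logs above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same socket path dockerd reports in the journal.
    	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 10*time.Second)
    	if err != nil {
    		// A failure here mirrors dockerd's "failed to dial ..." error
    		// when containerd is not answering on the socket.
    		fmt.Println("containerd unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("containerd socket is accepting connections")
    }

If this probe also fails, the problem sits with containerd (or the socket path) rather than with dockerd's own configuration.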

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config unset cpus" to be -""- but got *"W0429 11:32:24.436360   13496 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 config get cpus: exit status 14 (356.2715ms)

                                                
                                                
** stderr ** 
	W0429 11:32:24.834009    5964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0429 11:32:24.834009    5964 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0429 11:32:25.199761    8784 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config get cpus" to be -""- but got *"W0429 11:32:25.534739    6420 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config unset cpus" to be -""- but got *"W0429 11:32:25.850137   10396 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 config get cpus: exit status 14 (337.0624ms)

                                                
                                                
** stderr ** 
	W0429 11:32:26.197753    5912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-197400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0429 11:32:26.197753    5912 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.09s)
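Note: all five assertions in this test fail for the same reason. The Windows Docker CLI prepends the warning line 'Unable to resolve the current Docker CLI context "default" ...' to stderr, so the exact-match comparison against the expected error text never succeeds. The following is a hedged sketch of one way such warning lines could be filtered out before comparing; the helper name and the klog-style matching rule are assumptions, not the minikube test code (which compares stderr verbatim).

    // stripCLIWarnings drops klog-style warning lines (leading "W" plus a
    // source location) from captured stderr, so a comparison sees only the
    // real command output. Sketch only, not minikube's actual helper.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func stripCLIWarnings(stderr string) string {
    	var kept []string
    	for _, line := range strings.Split(stderr, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "W") && strings.Contains(trimmed, "main.go:") {
    			continue // e.g. the Docker CLI context warning above
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n")
    }

    func main() {
    	raw := "W0429 11:32:26.197753    5912 main.go:291] Unable to resolve the current Docker CLI context\n" +
    		"Error: specified key could not be found in config"
    	// Prints only: "Error: specified key could not be found in config"
    	fmt.Printf("%q\n", stripCLIWarnings(raw))
    }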

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 service --namespace=default --https --url hello-node: exit status 1 (15.0229781s)

                                                
                                                
** stderr ** 
	W0429 11:33:12.668868   14032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-197400 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url --format={{.IP}}: exit status 1 (15.0453375s)

                                                
                                                
** stderr ** 
	W0429 11:33:27.702291    1620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url: exit status 1 (15.031906s)

                                                
                                                
** stderr ** 
	W0429 11:33:42.720035    6588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-197400 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
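Note: this is the same failure mode as ServiceCmd/HTTPS and ServiceCmd/Format above. The service command exits non-zero after roughly 15s and prints no URL, so the scheme check at functional_test.go:1569 sees an empty string instead of "http". A minimal Go sketch of that kind of endpoint validation using net/url follows; it is not the test's actual code, and the passing-case address is illustrative.

    // Sketch of validating a service URL the way the test expects:
    // a non-empty endpoint whose scheme is "http".
    package main

    import (
    	"fmt"
    	"net/url"
    )

    func checkEndpoint(endpoint string) error {
    	if endpoint == "" {
    		return fmt.Errorf("empty endpoint: the service command produced no URL")
    	}
    	u, err := url.Parse(endpoint)
    	if err != nil {
    		return fmt.Errorf("parse %q: %w", endpoint, err)
    	}
    	if u.Scheme != "http" {
    		return fmt.Errorf("expected scheme %q, got %q", "http", u.Scheme)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkEndpoint(""))                           // the failing case above
    	fmt.Println(checkEndpoint("http://172.26.179.82:30080")) // illustrative address; prints <nil>
    }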

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (69.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- sh -c "ping -c 1 172.26.176.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- sh -c "ping -c 1 172.26.176.1": exit status 1 (10.5568617s)

                                                
                                                
-- stdout --
	PING 172.26.176.1 (172.26.176.1): 56 data bytes
	
	--- 172.26.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:50:46.936643    9316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.26.176.1) from pod (busybox-fc5497c4f-dsnxf): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- sh -c "ping -c 1 172.26.176.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- sh -c "ping -c 1 172.26.176.1": exit status 1 (10.5577112s)

                                                
                                                
-- stdout --
	PING 172.26.176.1 (172.26.176.1): 56 data bytes
	
	--- 172.26.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:50:58.072448    6840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.26.176.1) from pod (busybox-fc5497c4f-kxn7k): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- sh -c "ping -c 1 172.26.176.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- sh -c "ping -c 1 172.26.176.1": exit status 1 (10.5580198s)

                                                
                                                
-- stdout --
	PING 172.26.176.1 (172.26.176.1): 56 data bytes
	
	--- 172.26.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:51:09.196649    2444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.26.176.1) from pod (busybox-fc5497c4f-ndzvx): exit status 1
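Note: all three pods resolve host.minikube.internal, yet lose 100% of ICMP echo packets to the Hyper-V host-side address 172.26.176.1. One plausible but unconfirmed explanation is the Windows host firewall dropping inbound ICMPv4 echo from the guest subnet, which would produce exactly this pattern. The probe itself is a one-packet busybox ping; a Go sketch of the equivalent check driven via os/exec (Linux/busybox ping flags, as in the test) is below.

    // Sketch: the single-packet reachability probe the test runs inside the
    // busybox pod, here driven from Go via os/exec.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	host := "172.26.176.1" // the Hyper-V host-side address from the test log
    	out, err := exec.Command("ping", "-c", "1", host).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		// "exit status 1" with 100% packet loss matches the failures above.
    		fmt.Println("ping failed:", err)
    	}
    }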
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-437800 -n ha-437800
E0429 11:51:27.449536    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-437800 -n ha-437800: (12.3340503s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 logs -n 25: (8.9758771s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-197400                    | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:36 UTC | 29 Apr 24 11:37 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-197400 image build -t     | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:36 UTC | 29 Apr 24 11:37 UTC |
	|         | localhost/my-image:functional-197400 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-197400 image ls           | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:37 UTC |
	| delete  | -p functional-197400                 | functional-197400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:38 UTC |
	| start   | -p ha-437800 --wait=true             | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:49 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- apply -f             | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- rollout status       | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- get pods -o          | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- get pods -o          | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-dsnxf --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-kxn7k --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-ndzvx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-dsnxf --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-kxn7k --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-ndzvx --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-dsnxf -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-kxn7k -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-ndzvx -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- get pods -o          | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-dsnxf              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC |                     |
	|         | busybox-fc5497c4f-dsnxf -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.176.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC | 29 Apr 24 11:50 UTC |
	|         | busybox-fc5497c4f-kxn7k              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:50 UTC |                     |
	|         | busybox-fc5497c4f-kxn7k -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.176.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:51 UTC | 29 Apr 24 11:51 UTC |
	|         | busybox-fc5497c4f-ndzvx              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-437800 -- exec                 | ha-437800         | minikube6\jenkins | v1.33.0 | 29 Apr 24 11:51 UTC |                     |
	|         | busybox-fc5497c4f-ndzvx -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.176.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
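
The exec commands in the table above probe in-cluster DNS from each busybox replica. Below is a minimal sketch of the same probe, assuming kubectl is on PATH and reusing the context and pod name recorded in the table (the table's `kubectl -p ha-437800` is the minikube wrapper; plain kubectl reaches the same cluster via --context):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run nslookup inside the busybox pod, exactly as the test table records.
	out, err := exec.Command("kubectl", "--context", "ha-437800",
		"exec", "busybox-fc5497c4f-dsnxf", "--",
		"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("DNS probe failed:", err)
	}
}
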
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:38:36
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:38:36.492320    5624 out.go:291] Setting OutFile to fd 1208 ...
	I0429 11:38:36.492320    5624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:38:36.492320    5624 out.go:304] Setting ErrFile to fd 988...
	I0429 11:38:36.492320    5624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:38:36.515311    5624 out.go:298] Setting JSON to false
	I0429 11:38:36.518304    5624 start.go:129] hostinfo: {"hostname":"minikube6","uptime":32189,"bootTime":1714358527,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:38:36.518304    5624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:38:36.525131    5624 out.go:177] * [ha-437800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:38:36.528092    5624 notify.go:220] Checking for updates...
	I0429 11:38:36.530761    5624 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:38:36.533288    5624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:38:36.535997    5624 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:38:36.538664    5624 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:38:36.540913    5624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:38:36.543678    5624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:38:41.922743    5624 out.go:177] * Using the hyperv driver based on user configuration
	I0429 11:38:41.926389    5624 start.go:297] selected driver: hyperv
	I0429 11:38:41.926389    5624 start.go:901] validating driver "hyperv" against <nil>
	I0429 11:38:41.926389    5624 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:38:41.977395    5624 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:38:41.978641    5624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:38:41.978815    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:38:41.978815    5624 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 11:38:41.978815    5624 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:38:41.978815    5624 start.go:340] cluster config:
	{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:38:41.979347    5624 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:38:41.986830    5624 out.go:177] * Starting "ha-437800" primary control-plane node in "ha-437800" cluster
	I0429 11:38:41.988718    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:38:41.989238    5624 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:38:41.989441    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:38:41.989585    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:38:41.989585    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:38:41.990189    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:38:41.990189    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json: {Name:mkde8b2acced2302a59bd62b727de17f46014934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
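
The two lines above save the freshly generated cluster config to config.json under a write lock. A simplified stand-in for that step, using an atomic write-then-rename rather than minikube's own lock package, with the struct trimmed to a few of the fields visible in the dump above:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is trimmed to a handful of fields from the logged dump.
type ClusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
}

// saveConfig writes the config to a temp file, then renames it into place;
// rename is atomic on the same filesystem, so readers never see a torn file.
func saveConfig(path string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	cfg := ClusterConfig{Name: "ha-437800", Driver: "hyperv", Memory: 2200, CPUs: 2}
	if err := saveConfig(filepath.Join(os.TempDir(), "config.json"), cfg); err != nil {
		panic(err)
	}
}
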
	I0429 11:38:41.991691    5624 start.go:360] acquireMachinesLock for ha-437800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:38:41.991691    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800"
	I0429 11:38:41.991691    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:38:41.992220    5624 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 11:38:41.994443    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:38:41.994443    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:38:41.994443    5624 client.go:168] LocalClient.Create starting
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:38:45.875269    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:38:45.876150    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:45.876150    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:38:47.363130    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:38:47.363130    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:47.363730    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:38:50.887488    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:38:50.887488    5624 main.go:141] libmachine: [stderr =====>] : 
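
Each [executing ==>] / [stdout =====>] pair above is the hyperv driver shelling out to PowerShell and parsing what comes back; the switch enumeration returns JSON. A hedged sketch of that round trip, where the struct fields mirror the Select-Object projection in the logged command (the SwitchType comment follows Hyper-V's enum, in which the logged value 1 means an internal switch):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

func main() {
	// @(...) forces an array even when a single switch is returned, so the
	// JSON always unmarshals into a slice.
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
	out, err := cmd.Output()
	if err != nil {
		fmt.Println("powershell failed:", err)
		return
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
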
	I0429 11:38:50.890162    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:38:51.418530    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:38:51.592762    5624 main.go:141] libmachine: Creating VM...
	I0429 11:38:51.592762    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:38:54.398774    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:38:54.398907    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:54.398907    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:38:54.399115    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:38:56.200408    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:38:56.201159    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:56.201159    5624 main.go:141] libmachine: Creating VHD
	I0429 11:38:56.201159    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:38:59.838589    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6FFE2E55-97CA-42A8-86D7-9C44E847BFA0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:38:59.838717    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:59.838717    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:38:59.838717    5624 main.go:141] libmachine: Writing SSH key tar header
	I0429 11:38:59.848739    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:39:02.955462    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:02.956253    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:02.956253    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd' -SizeBytes 20000MB
	I0429 11:39:05.455031    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:05.455031    5624 main.go:141] libmachine: [stderr =====>] : 
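
The VHD sequence above is worth calling out: a tiny fixed-size VHD is created first so libmachine can write the SSH key into its raw bytes as a tar header (the "Writing magic tar header" lines), then the file is converted to a dynamic VHD and resized to the requested 20000MB. A sketch that chains the same three PowerShell steps, with the path taken from the log and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

// ps runs one PowerShell command the way the logged driver does.
func ps(command string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800`
	steps := []string{
		// 1. small fixed VHD (its raw bytes will carry the SSH-key tar)
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
		// 2. convert to a dynamic VHD, deleting the fixed source
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		// 3. grow to the requested disk size
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
	}
	for _, s := range steps {
		if err := ps(s); err != nil {
			panic(err)
		}
	}
}
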
	I0429 11:39:05.455848    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:39:09.166874    5624 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-437800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 11:39:09.166935    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:09.166935    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800 -DynamicMemoryEnabled $false
	I0429 11:39:11.396816    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:11.396816    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:11.397172    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800 -Count 2
	I0429 11:39:13.561606    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:13.561606    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:13.561840    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\boot2docker.iso'
	I0429 11:39:16.069448    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:16.069701    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:16.069793    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd'
	I0429 11:39:18.659398    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:18.659398    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:18.659398    5624 main.go:141] libmachine: Starting VM...
	I0429 11:39:18.659801    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800
	I0429 11:39:21.704077    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:21.704545    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:21.704545    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:39:21.704545    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:23.838160    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:23.839123    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:23.839188    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:26.244651    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:26.244651    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:27.244955    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:31.867953    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:31.867953    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:32.877964    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:34.972321    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:34.972321    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:34.972849    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:37.425869    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:37.425869    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:38.433128    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:40.586000    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:40.586595    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:40.586595    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:43.083143    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:43.083306    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:44.095030    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:46.280115    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:46.280589    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:46.280806    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [stderr =====>] : 
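
The "Waiting for host to start..." block above alternates two PowerShell queries, VM state and the first adapter's first IP, until an address appears (about 27 seconds here). A hedged sketch of that polling loop; the retry interval is illustrative, though the log shows roughly five seconds between attempts:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// query runs one PowerShell expression and returns its trimmed stdout.
func query(command string) string {
	out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const vm = "ha-437800"
	for i := 0; i < 60; i++ {
		state := query(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		ip := query(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
		if state == "Running" && ip != "" {
			fmt.Println("VM is up at", ip)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for an IP")
}
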
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:50.915924    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:50.915987    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:50.915987    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:39:50.915987    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:53.034378    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:53.035177    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:53.035177    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:55.518359    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:39:55.518359    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:55.525145    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:39:55.535483    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:39:55.535483    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:39:55.674222    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 11:39:55.674353    5624 buildroot.go:166] provisioning hostname "ha-437800"
	I0429 11:39:55.674353    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:57.743353    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:57.743353    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:57.744178    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:00.242402    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:00.242402    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:00.249132    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:00.249807    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:00.249807    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800 && echo "ha-437800" | sudo tee /etc/hostname
	I0429 11:40:00.402652    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800
	
	I0429 11:40:00.402652    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:02.457677    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:02.457677    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:02.457778    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:05.037289    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:05.037289    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:05.043755    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:05.044480    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:05.044480    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:40:05.199203    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:40:05.199203    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:40:05.199323    5624 buildroot.go:174] setting up certificates
	I0429 11:40:05.199323    5624 provision.go:84] configureAuth start
	I0429 11:40:05.199449    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:07.276116    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:07.276116    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:07.277038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:09.834337    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:09.835274    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:09.835414    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:11.887732    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:11.887732    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:11.888727    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:14.415902    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:14.415902    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:14.416510    5624 provision.go:143] copyHostCerts
	I0429 11:40:14.416679    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:40:14.417351    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:40:14.417433    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:40:14.417558    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:40:14.419225    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:40:14.419438    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:40:14.419549    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:40:14.419687    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:40:14.420888    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:40:14.421372    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:40:14.421488    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:40:14.421878    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:40:14.422828    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800 san=[127.0.0.1 172.26.176.3 ha-437800 localhost minikube]
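
The server cert generated above is signed by the local CA and carries the SAN list from the log line (127.0.0.1, the VM IP, the machine name, localhost, minikube). A minimal sketch using crypto/x509; the self-signed CA, key size, and validity below are illustrative assumptions, since minikube actually loads ca.pem/ca-key.pem from the certs directory, and error returns are elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative self-signed CA standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list recorded in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-437800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-437800", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.176.3")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
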
	I0429 11:40:14.754918    5624 provision.go:177] copyRemoteCerts
	I0429 11:40:14.770646    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:40:14.770646    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:16.835461    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:16.835678    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:16.835678    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:19.356157    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:19.356221    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:19.356221    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:40:19.466257    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6955733s)
	I0429 11:40:19.466257    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:40:19.466749    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:40:19.518151    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:40:19.518450    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 11:40:19.566660    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:40:19.566959    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:40:19.612368    5624 provision.go:87] duration metric: took 14.4129311s to configureAuth
	I0429 11:40:19.612368    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:40:19.612996    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:40:19.613076    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:21.640535    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:21.641488    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:21.641652    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:24.137961    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:24.138825    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:24.145291    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:24.145556    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:24.145556    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:40:24.283831    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:40:24.283831    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:40:24.284096    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:40:24.284096    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:26.320672    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:26.321411    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:26.321411    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:28.814670    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:28.814670    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:28.821837    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:28.821975    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:28.821975    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:40:28.986150    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:40:28.986269    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:31.033604    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:31.033604    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:31.033663    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:33.490232    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:33.491149    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:33.497204    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:33.497888    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:33.497888    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:40:35.657947    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
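
The diff-or-replace one-liner above makes the unit install idempotent: docker.service.new only replaces the live unit when the two differ, and on this first boot the diff fails outright ("can't stat"), so the unit is installed and enabled, hence the "Created symlink" line. A sketch of the same logic as it would look host-side (minikube runs the shell form over SSH, so this is a stand-in, not its code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const unitPath = "/lib/systemd/system/docker.service"
	newUnit, err := os.ReadFile(unitPath + ".new")
	if err != nil {
		panic(err)
	}
	old, err := os.ReadFile(unitPath)
	// First boot: the live unit does not exist, matching the
	// "diff: can't stat" branch in the log, so install unconditionally.
	if err == nil && bytes.Equal(old, newUnit) {
		fmt.Println("unit unchanged, nothing to do")
		return
	}
	if err := os.Rename(unitPath+".new", unitPath); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
}
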
	
	I0429 11:40:35.658013    5624 machine.go:97] duration metric: took 44.7416719s to provisionDockerMachine
	I0429 11:40:35.658013    5624 client.go:171] duration metric: took 1m53.6626719s to LocalClient.Create
	I0429 11:40:35.658149    5624 start.go:167] duration metric: took 1m53.6627335s to libmachine.API.Create "ha-437800"
	I0429 11:40:35.658197    5624 start.go:293] postStartSetup for "ha-437800" (driver="hyperv")
	I0429 11:40:35.658220    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:40:35.673328    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:40:35.673553    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:37.742844    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:37.743860    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:37.743947    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:40.265000    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:40.265870    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:40.266066    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:40:40.369670    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6959646s)
	I0429 11:40:40.384152    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:40:40.392855    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:40:40.392979    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:40:40.393675    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:40:40.395409    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:40:40.395409    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:40:40.412804    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:40:40.432921    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:40:40.486575    5624 start.go:296] duration metric: took 4.8283172s for postStartSetup
	I0429 11:40:40.489565    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:42.586663    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:42.587676    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:42.587901    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:45.124860    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:45.124860    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:45.124934    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:40:45.127211    5624 start.go:128] duration metric: took 2m3.1340179s to createHost
	I0429 11:40:45.127747    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:47.193834    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:47.194268    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:47.194268    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:49.689959    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:49.690734    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:49.697660    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:49.698440    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:49.698440    5624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:40:49.835370    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714390849.839550148
	
	I0429 11:40:49.835370    5624 fix.go:216] guest clock: 1714390849.839550148
	I0429 11:40:49.835370    5624 fix.go:229] Guest: 2024-04-29 11:40:49.839550148 +0000 UTC Remote: 2024-04-29 11:40:45.1277475 +0000 UTC m=+128.818450601 (delta=4.711802648s)
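
fix.go compares the guest's `date +%s.%N` output against the host-side reference and re-sets the guest clock when the drift is too large, about 4.7s here. A hedged sketch of the delta check; the 2s threshold is an assumption for illustration, not necessarily minikube's value:

package main

import (
	"fmt"
	"time"
)

// needsClockSet reports whether the absolute drift between the guest's
// reported epoch time and the host reference exceeds max.
func needsClockSet(guestEpoch float64, host time.Time, max time.Duration) (bool, time.Duration) {
	sec := int64(guestEpoch)
	nsec := int64((guestEpoch - float64(sec)) * 1e9)
	delta := time.Unix(sec, nsec).Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta > max, delta
}

func main() {
	// Figures from the log: the guest reported 1714390849.839550148 while the
	// host-side reference was 2024-04-29 11:40:45.1277475 UTC, after which the
	// driver ran `sudo date -s @1714390849` over SSH.
	host := time.Unix(1714390845, 127747500)
	set, delta := needsClockSet(1714390849.839550148, host, 2*time.Second)
	fmt.Printf("delta=%v set=%v\n", delta, set)
}
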
	I0429 11:40:49.835370    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:51.954065    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:51.954065    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:51.954420    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:54.417699    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:54.418398    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:54.423972    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:54.424723    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:54.424723    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714390849
	I0429 11:40:54.574529    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:40:49 UTC 2024
	
	I0429 11:40:54.574529    5624 fix.go:236] clock set: Mon Apr 29 11:40:49 UTC 2024
	 (err=<nil>)
	I0429 11:40:54.574529    5624 start.go:83] releasing machines lock for "ha-437800", held for 2m12.5817912s
	I0429 11:40:54.575064    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:56.652046    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:56.652579    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:56.652579    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:59.190683    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:59.191428    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:59.196972    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:40:59.196972    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:59.208518    5624 ssh_runner.go:195] Run: cat /version.json
	I0429 11:40:59.208698    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:01.369746    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:01.369746    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:01.369846    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:41:04.032433    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:41:04.033016    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:04.034103    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:41:04.052594    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:41:04.052594    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:04.052594    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:41:04.128009    5624 ssh_runner.go:235] Completed: cat /version.json: (4.919372s)
	I0429 11:41:04.142024    5624 ssh_runner.go:195] Run: systemctl --version
	I0429 11:41:04.213727    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0167165s)
	I0429 11:41:04.226439    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 11:41:04.238496    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:41:04.252323    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:41:04.286115    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:41:04.286115    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:41:04.286115    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:41:04.336994    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:41:04.373518    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:41:04.392150    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:41:04.404506    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:41:04.438687    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:41:04.475781    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:41:04.512036    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:41:04.543440    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:41:04.582376    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:41:04.615588    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:41:04.648793    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:41:04.681904    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:41:04.715181    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:41:04.747255    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:04.962615    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
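The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the pause:3.9 sandbox image, and the runc v2 shim before the restart. Condensed to its essentials (commands as they appear in the log):

    # Point the CRI sandbox image at pause:3.9 and disable the systemd cgroup driver.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # Migrate any v1 runtime references to the runc v2 shim.
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    # Reload unit files and restart containerd with the new settings.
    sudo systemctl daemon-reload && sudo systemctl restart containerd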
	I0429 11:41:04.994778    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:41:05.008746    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:41:05.052500    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:41:05.091521    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:41:05.144830    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:41:05.181982    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:41:05.219071    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:41:05.281381    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:41:05.303749    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:41:05.355512    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:41:05.374724    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:41:05.393089    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:41:05.448059    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:41:05.676687    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:41:05.887364    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:41:05.887634    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:41:05.938625    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:06.158705    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:41:08.681433    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.522708s)
	I0429 11:41:08.696709    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:41:08.734729    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:41:08.773784    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:41:09.013987    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:41:09.232810    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:09.457623    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:41:09.502220    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:41:09.539328    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:09.775032    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
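Since Docker is the selected runtime, the log stops containerd and crio, points crictl at cri-dockerd, and unmasks and enables the cri-docker socket and service. The same sequence in shell form (paths as in the log):

    # Tell crictl to talk to cri-dockerd instead of containerd.
    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
    # Unmask, enable, and restart the cri-docker socket and service.
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service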
	I0429 11:41:09.890046    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:41:09.904827    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 11:41:09.914221    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:41:09.928454    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:41:09.947490    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:41:10.001368    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:41:10.012377    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:41:10.054952    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:41:10.090454    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:41:10.090454    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:41:10.097500    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:41:10.097500    5624 ip.go:210] interface addr: 172.26.176.1/20
	I0429 11:41:10.109499    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:41:10.117354    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
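The host.minikube.internal entry is written idempotently: any existing line for the name is filtered out before the fresh mapping is appended, so repeated starts never duplicate it. The same pattern, generalized (ip and name taken from the log):

    # Replace-or-add a hosts entry without duplicating it.
    ip=172.26.176.1; name=host.minikube.internal
    tab=$(printf '\t')
    { grep -v "${tab}${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm "/tmp/hosts.$$"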
	I0429 11:41:10.153511    5624 kubeadm.go:877] updating cluster {Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:41:10.154079    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:41:10.163447    5624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 11:41:10.186795    5624 docker.go:685] Got preloaded images: 
	I0429 11:41:10.186795    5624 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 11:41:10.198623    5624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 11:41:10.239584    5624 ssh_runner.go:195] Run: which lz4
	I0429 11:41:10.246301    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 11:41:10.260390    5624 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 11:41:10.266895    5624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 11:41:10.267020    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 11:41:12.280808    5624 docker.go:649] duration metric: took 2.0342758s to copy over tarball
	I0429 11:41:12.293601    5624 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 11:41:21.182274    5624 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8886036s)
	I0429 11:41:21.182348    5624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 11:41:21.254351    5624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 11:41:21.274179    5624 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 11:41:21.330833    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:21.550712    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:41:24.943343    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3440535s)
	I0429 11:41:24.953411    5624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 11:41:24.978211    5624 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 11:41:24.978211    5624 cache_images.go:84] Images are preloaded, skipping loading
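Because no images were preloaded, minikube copies in the ~360 MB preload tarball and unpacks it straight into /var, which repopulates /var/lib/docker; after a Docker restart, all eight control-plane images are present with no registry pulls. The unpack step, as run in the log:

    # Extract the lz4-compressed preload into /var (restores /var/lib/docker).
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    # Restart Docker so it picks up the restored image store, then verify.
    sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'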
	I0429 11:41:24.978211    5624 kubeadm.go:928] updating node { 172.26.176.3 8443 v1.30.0 docker true true} ...
	I0429 11:41:24.978211    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.176.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:41:24.987539    5624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 11:41:25.022297    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:41:25.022297    5624 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 11:41:25.022297    5624 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:41:25.022450    5624 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.176.3 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-437800 NodeName:ha-437800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.176.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.176.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:41:25.022518    5624 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.176.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-437800"
	  kubeletExtraArgs:
	    node-ip: 172.26.176.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.176.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
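The rendered file stacks four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by YAML document markers. One way to sanity-check such a file before a real init, assuming upstream kubeadm's --dry-run flag, is:

    # Validate the rendered config without modifying the node.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run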
	I0429 11:41:25.022717    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:41:25.035746    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:41:25.064321    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:41:25.064321    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
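kube-vip runs as a static pod with NET_ADMIN/NET_RAW, announces 172.26.191.254 over ARP on eth0, and load-balances the API server on port 8443 using the plndr-cp-lock lease for leader election. Once the control plane is up, the VIP can be probed directly (a sketch; -k skips server-cert verification for a quick reachability check):

    # An HTTP response, even 401/403, proves the VIP routes to an apiserver.
    curl -k https://172.26.191.254:8443/version
    # Inspect the leader-election lease kube-vip uses for control-plane failover.
    kubectl -n kube-system get lease plndr-cp-lock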
	I0429 11:41:25.078782    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:41:25.096459    5624 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:41:25.108782    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 11:41:25.128531    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 11:41:25.159904    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:41:25.191951    5624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 11:41:25.224116    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0429 11:41:25.269964    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:41:25.276712    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:41:25.314177    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:25.541266    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:41:25.573048    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.176.3
	I0429 11:41:25.573048    5624 certs.go:194] generating shared ca certs ...
	I0429 11:41:25.573048    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.573048    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:41:25.574034    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:41:25.574034    5624 certs.go:256] generating profile certs ...
	I0429 11:41:25.575143    5624 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:41:25.575263    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt with IP's: []
	I0429 11:41:25.933264    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt ...
	I0429 11:41:25.933264    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt: {Name:mke3f60849b28a4fba6b85cd3f79b6cb8b4dd390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.934741    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key ...
	I0429 11:41:25.934741    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key: {Name:mk16731689887025c819e8844cbaf6132d0c6269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.935261    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4
	I0429 11:41:25.936337    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.191.254]
	I0429 11:41:26.150290    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 ...
	I0429 11:41:26.150290    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4: {Name:mk0bd09318c9f647250117ce8a1458a877442397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.151481    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4 ...
	I0429 11:41:26.151481    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4: {Name:mk8f0755d767ce5ab827f02650006a37ddc122fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.152659    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:41:26.167218    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
	I0429 11:41:26.168561    5624 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
	I0429 11:41:26.169279    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt with IP's: []
	I0429 11:41:26.418072    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt ...
	I0429 11:41:26.418072    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt: {Name:mk96bc7760b5d88b39ffdf07f71258ba50cc8f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.420002    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key ...
	I0429 11:41:26.420002    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key: {Name:mka8851bea0e8e606285ced0ac7e8dc119877f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.420002    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:41:26.421317    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:41:26.421480    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:41:26.421687    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:41:26.421850    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:41:26.422147    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:41:26.422304    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:41:26.429525    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:41:26.430703    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:41:26.431724    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:41:26.433256    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:41:26.433992    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:41:26.434212    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:41:26.434339    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:41:26.434339    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:26.435819    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:41:26.483144    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:41:26.528404    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:41:26.574278    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:41:26.625982    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 11:41:26.677103    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:41:26.730505    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:41:26.784244    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:41:26.837778    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:41:26.887539    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:41:26.935317    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:41:26.994853    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
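The apiserver certificate generated above must cover every address clients will use: the service IP 10.96.0.1, 127.0.0.1, the node IP 172.26.176.3, and the HA VIP 172.26.191.254. The SAN list on the transferred cert can be confirmed with standard openssl (1.1.1 or newer for the -ext option):

    # Print the Subject Alternative Names baked into the apiserver cert.
    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName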
	I0429 11:41:27.041600    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:41:27.063935    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:41:27.099134    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.108785    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.122468    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.145143    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:41:27.184495    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:41:27.218475    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.225873    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.238956    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.260960    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 11:41:27.297915    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:41:27.334610    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.342064    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.357282    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.379168    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
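Each trusted CA is installed twice: under its own name in /usr/share/ca-certificates and as an OpenSSL subject-hash symlink in /etc/ssl/certs (the 51391683.0-style names above), which is how TLS clients locate it. The general recipe behind those symlinks:

    # Link a CA cert into the OpenSSL trust directory under its subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"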
	I0429 11:41:27.413443    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:41:27.422107    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:41:27.422107    5624 kubeadm.go:391] StartCluster: {Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:41:27.432503    5624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 11:41:27.476980    5624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:41:27.511165    5624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:41:27.544518    5624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:41:27.564322    5624 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:41:27.564322    5624 kubeadm.go:156] found existing configuration files:
	
	I0429 11:41:27.580083    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:41:27.598316    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:41:27.612051    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:41:27.644522    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:41:27.663384    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:41:27.674001    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:41:27.707847    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:41:27.724859    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:41:27.737842    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:41:27.773371    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:41:27.791132    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:41:27.804487    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
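The four grep-then-rm exchanges above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. The same sweep as a loop:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done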
	I0429 11:41:27.824585    5624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 11:41:28.316373    5624 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:41:43.824833    5624 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:41:43.825060    5624 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 11:41:43.825861    5624 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:41:43.828285    5624 out.go:204]   - Generating certificates and keys ...
	I0429 11:41:43.828485    5624 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:41:43.828597    5624 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:41:43.828696    5624 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:41:43.829324    5624 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-437800 localhost] and IPs [172.26.176.3 127.0.0.1 ::1]
	I0429 11:41:43.829569    5624 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-437800 localhost] and IPs [172.26.176.3 127.0.0.1 ::1]
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:41:43.831056    5624 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:41:43.831346    5624 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:41:43.831346    5624 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:41:43.831346    5624 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:41:43.834901    5624 out.go:204]   - Booting up control plane ...
	I0429 11:41:43.834901    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:41:43.836112    5624 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:41:43.836112    5624 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:41:43.836112    5624 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:41:43.836690    5624 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:41:43.836847    5624 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00263247s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [api-check] The API server is healthy after 8.766804148s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:41:43.837638    5624 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:41:43.837801    5624 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:41:43.838062    5624 kubeadm.go:309] [mark-control-plane] Marking the node ha-437800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:41:43.838362    5624 kubeadm.go:309] [bootstrap-token] Using token: h7cu04.z6k8bpxubty5dxx7
	I0429 11:41:43.841130    5624 out.go:204]   - Configuring RBAC rules ...
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:41:43.843264    5624 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:41:43.843432    5624 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:41:43.843432    5624 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:41:43.843796    5624 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:41:43.843863    5624 kubeadm.go:309] 
	I0429 11:41:43.843908    5624 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:41:43.843908    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:41:43.844106    5624 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:41:43.844106    5624 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844943    5624 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:41:43.845052    5624 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:41:43.845052    5624 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:41:43.845052    5624 kubeadm.go:309] 
	I0429 11:41:43.845052    5624 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:41:43.845625    5624 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:41:43.845754    5624 kubeadm.go:309] 
	I0429 11:41:43.845883    5624 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token h7cu04.z6k8bpxubty5dxx7 \
	I0429 11:41:43.846165    5624 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 11:41:43.846165    5624 kubeadm.go:309] 	--control-plane 
	I0429 11:41:43.846165    5624 kubeadm.go:309] 
	I0429 11:41:43.846472    5624 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:41:43.846507    5624 kubeadm.go:309] 
	I0429 11:41:43.846624    5624 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token h7cu04.z6k8bpxubty5dxx7 \
	I0429 11:41:43.846624    5624 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
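The printed join commands embed the bootstrap token h7cu04.z6k8bpxubty5dxx7, which the InitConfiguration above gives a 24h ttl. If a node joins after the token expires, an equivalent command can be regenerated on a control-plane node with standard kubeadm:

    # Mint a fresh bootstrap token and print a ready-to-run worker join command.
    kubeadm token create --print-join-command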
	I0429 11:41:43.846624    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:41:43.846624    5624 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 11:41:43.850556    5624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 11:41:43.871289    5624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 11:41:43.879887    5624 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 11:41:43.879887    5624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 11:41:43.931050    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 11:41:44.666335    5624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:41:44.681327    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:44.681327    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800 minikube.k8s.io/updated_at=2024_04_29T11_41_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=true
	I0429 11:41:44.691497    5624 ops.go:34] apiserver oom_adj: -16
	I0429 11:41:44.909700    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:45.423884    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:45.923645    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:46.426621    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:46.915461    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:47.421871    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:47.924043    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:48.417440    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:48.917835    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:49.420539    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:49.911588    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:50.425193    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:50.924922    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:51.411931    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:51.911074    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:52.416294    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:52.913764    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:53.416285    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:53.923201    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:54.421473    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:54.921538    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:55.422506    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:55.911217    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:56.417742    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:56.583171    5624 kubeadm.go:1107] duration metric: took 11.9167432s to wait for elevateKubeSystemPrivileges
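
The ~500ms loop above (twenty-odd `kubectl get sa default` runs) is minikube waiting for kubeadm to create the default service account before elevating privileges. A minimal Go sketch of that poll, with invented names rather than minikube's actual implementation:

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA keeps re-running `kubectl get sa default` until kubeadm
    // has created the default service account, then lets the RBAC step proceed.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // the cadence visible in the timestamps above
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }
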
	W0429 11:41:56.583263    5624 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:41:56.583263    5624 kubeadm.go:393] duration metric: took 29.1609291s to StartCluster
	I0429 11:41:56.583263    5624 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:56.583263    5624 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:41:56.584636    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:56.586259    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:41:56.586259    5624 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:41:56.586259    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:41:56.586259    5624 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 11:41:56.586259    5624 addons.go:69] Setting storage-provisioner=true in profile "ha-437800"
	I0429 11:41:56.586790    5624 addons.go:234] Setting addon storage-provisioner=true in "ha-437800"
	I0429 11:41:56.586844    5624 addons.go:69] Setting default-storageclass=true in profile "ha-437800"
	I0429 11:41:56.587008    5624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-437800"
	I0429 11:41:56.587034    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:41:56.587034    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:41:56.587326    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:56.588097    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:56.744040    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:41:57.066974    5624 start.go:946] {"host.minikube.internal": 172.26.176.1} host record injected into CoreDNS's ConfigMap
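
The sed pipeline above splices a hosts{} block (mapping host.minikube.internal to the host gateway) into CoreDNS's Corefile and replaces the ConfigMap. minikube does this by shelling out to kubectl; an equivalent client-go sketch under the same names and paths, not the code minikube actually runs:

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Paths and addresses copied from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Same edit as the sed expression: insert a hosts{} block ahead of the
        // forward plugin so host.minikube.internal resolves inside the cluster.
        hosts := "        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
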
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:58.809286    5624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:58.809286    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:58.810293    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:41:58.812294    5624 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:41:58.812294    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:41:58.813286    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:58.813286    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 11:41:58.814285    5624 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 11:41:58.815310    5624 addons.go:234] Setting addon default-storageclass=true in "ha-437800"
	I0429 11:41:58.815310    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:41:58.816291    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:42:01.070623    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:01.070773    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:01.070872    5624 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:42:01.070872    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:42:01.070872    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:42:01.175648    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:01.175874    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:01.176272    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:03.897628    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:42:03.897628    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:03.898166    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:42:04.074465    5624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:42:05.868718    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:42:05.868718    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:05.870276    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:42:06.015850    5624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:42:06.199503    5624 round_trippers.go:463] GET https://172.26.191.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 11:42:06.199503    5624 round_trippers.go:469] Request Headers:
	I0429 11:42:06.199503    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:42:06.199503    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:42:06.224158    5624 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0429 11:42:06.225193    5624 round_trippers.go:463] PUT https://172.26.191.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 11:42:06.225193    5624 round_trippers.go:469] Request Headers:
	I0429 11:42:06.225193    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:42:06.225193    5624 round_trippers.go:473]     Content-Type: application/json
	I0429 11:42:06.225193    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:42:06.232172    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:42:06.236167    5624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 11:42:06.240156    5624 addons.go:505] duration metric: took 9.6538218s for enable addons: enabled=[storage-provisioner default-storageclass]
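
The storage.k8s.io GET/PUT pair a few lines up is the default-storageclass addon marking "standard" as the cluster default. A client-go sketch of the same round trip; the annotation key is the standard upstream one, while the function name and clientset construction (as in the earlier CoreDNS sketch) are assumptions:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // markDefault reproduces the GET/PUT pair above: fetch the "standard"
    // StorageClass and annotate it as the cluster default.
    func markDefault(ctx context.Context, cs *kubernetes.Clientset) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
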
	I0429 11:42:06.240156    5624 start.go:245] waiting for cluster config update ...
	I0429 11:42:06.240156    5624 start.go:254] writing updated cluster config ...
	I0429 11:42:06.245157    5624 out.go:177] 
	I0429 11:42:06.254493    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:42:06.254629    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:42:06.260511    5624 out.go:177] * Starting "ha-437800-m02" control-plane node in "ha-437800" cluster
	I0429 11:42:06.264045    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:42:06.264045    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:42:06.264670    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:42:06.264670    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:42:06.265067    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:42:06.268036    5624 start.go:360] acquireMachinesLock for ha-437800-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:42:06.268036    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800-m02"
	I0429 11:42:06.268036    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:42:06.269042    5624 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 11:42:06.272037    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:42:06.272037    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:42:06.272037    5624 client.go:168] LocalClient.Create starting
	I0429 11:42:06.272037    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:42:08.217300    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:42:08.217300    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:08.218296    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:42:09.980838    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:42:09.980838    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:09.981497    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:42:11.510951    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:42:11.511620    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:11.511620    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:42:15.110841    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:42:15.110841    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:15.114008    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:42:15.629405    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:42:15.805205    5624 main.go:141] libmachine: Creating VM...
	I0429 11:42:15.806211    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:42:18.674560    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:42:18.675156    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:18.675524    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:42:18.675805    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:42:20.488894    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:42:20.489932    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:20.490021    5624 main.go:141] libmachine: Creating VHD
	I0429 11:42:20.490021    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:42:24.145856    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 90CD0A4C-0EA6-4A1A-B2E9-1522C726FEB7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:42:24.145856    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:24.145856    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:42:24.145856    5624 main.go:141] libmachine: Writing SSH key tar header
	I0429 11:42:24.157295    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd' -SizeBytes 20000MB
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [stderr =====>] : 
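
"Writing magic tar header" and "Writing SSH key tar header" above seed the freshly created 10MB fixed VHD with a marker plus the machine's SSH key before the disk is converted to dynamic and resized; the guest detects that payload on first boot and formats the disk around it. A Go sketch of the idea, where the marker text and layout are assumptions borrowed from docker-machine-style drivers rather than read out of minikube's source:

    package sketch

    import (
        "archive/tar"
        "bytes"
        "os"
    )

    // seedDisk writes a tiny tar stream at offset 0 of the fixed VHD created
    // above, before Convert-VHD runs: an assumed "please format-me" marker
    // entry followed by the SSH public key the provisioner will log in with.
    func seedDisk(vhdPath string, pubKey []byte) error {
        const marker = "boot2docker, please format-me" // assumed marker text
        var buf bytes.Buffer
        tw := tar.NewWriter(&buf)
        entries := []struct {
            name string
            data []byte
        }{
            {marker, []byte(marker)},
            {".ssh/authorized_keys", pubKey},
        }
        for _, e := range entries {
            hdr := &tar.Header{Name: e.name, Mode: 0644, Size: int64(len(e.data))}
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if _, err := tw.Write(e.data); err != nil {
                return err
            }
        }
        if err := tw.Close(); err != nil {
            return err
        }
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = f.WriteAt(buf.Bytes(), 0)
        return err
    }
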
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:42:33.587931    5624 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-437800-m02 Off   0           0                 00:00:00 Operating normally 9.0
	
	
	
	I0429 11:42:33.588180    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:33.588243    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800-m02 -DynamicMemoryEnabled $false
	I0429 11:42:35.863281    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:35.863281    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:35.864143    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800-m02 -Count 2
	I0429 11:42:38.029235    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:38.029235    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:38.030354    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\boot2docker.iso'
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd'
	I0429 11:42:43.237627    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:43.237721    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:43.237721    5624 main.go:141] libmachine: Starting VM...
	I0429 11:42:43.237721    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800-m02
	I0429 11:42:46.320403    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:46.320721    5624 main.go:141] libmachine: [stderr =====>] : 
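
Every "[executing ==>]" / "[stdout =====>]" pair above is one round trip through powershell.exe: the Hyper-V driver has no API binding and shells out per operation, no profile, non-interactive. A minimal Go sketch of that pattern (psRun is an invented helper name, not the driver's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // psRun executes one PowerShell command the way the "[executing ==>]"
    // lines above do: a fresh powershell.exe process per call.
    func psRun(script string) (stdout, stderr string, err error) {
        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script)
        var out, errOut strings.Builder
        cmd.Stdout, cmd.Stderr = &out, &errOut
        err = cmd.Run()
        return out.String(), errOut.String(), err
    }

    func main() {
        out, errOut, err := psRun(`Hyper-V\Start-VM ha-437800-m02`)
        fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\n", out, errOut)
        if err != nil {
            panic(err)
        }
    }
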
	I0429 11:42:46.320721    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:42:46.320721    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:48.616367    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:48.616367    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:48.616667    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:51.148176    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:51.148176    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:52.156103    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:54.330276    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:54.330438    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:54.330438    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:56.840715    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:56.840715    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:57.842094    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:02.450214    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:43:02.450214    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:03.454148    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:05.609211    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:05.609655    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:05.609655    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:08.106934    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:43:08.107973    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:09.109237    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:16.061065    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:16.061065    5624 main.go:141] libmachine: [stderr =====>] : 
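
The repeating state-probe / empty-IP / retry rhythm above, roughly once a second, is the driver waiting for the guest's first DHCP lease; it stops once ipaddresses[0] finally returns 172.26.185.80. A sketch of such a loop (names invented, not minikube's code):

    package sketch

    import (
        "errors"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the first NIC's first address once a second, the cadence
    // visible in the log, until the guest reports an IPv4 lease.
    func waitForIP(vm string, attempts int) (string, error) {
        script := "(( Hyper-V\\Get-VM " + vm + " ).networkadapters[0]).ipaddresses[0]"
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
            if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
                return ip, nil // e.g. 172.26.185.80 above
            }
            time.Sleep(time.Second)
        }
        return "", errors.New("VM never reported an IPv4 address")
    }
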
	I0429 11:43:16.061065    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:43:16.061188    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:18.253299    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:18.253299    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:18.254215    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:20.771624    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:20.771624    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:20.778900    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:20.779400    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:20.779400    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:43:20.921867    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 11:43:20.921999    5624 buildroot.go:166] provisioning hostname "ha-437800-m02"
	I0429 11:43:20.921999    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:23.029510    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:23.029510    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:23.030184    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:25.562979    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:25.562979    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:25.570463    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:25.570643    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:25.570643    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800-m02 && echo "ha-437800-m02" | sudo tee /etc/hostname
	I0429 11:43:25.742794    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800-m02
	
	I0429 11:43:25.742794    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:27.866951    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:27.866951    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:27.867247    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:30.389636    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:30.390213    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:30.394911    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:30.395588    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:30.395588    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:43:30.546695    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
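
The hostname probe and the /etc/hosts fixup above run over SSH as user docker, authenticated with the per-machine key generated earlier. A self-contained x/crypto/ssh sketch of that round trip, with the key path, user, and address copied from the log; this is not minikube's sshutil implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa`
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.26.185.80:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out) // expect "ha-437800-m02" after the fixup above
    }
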
	I0429 11:43:30.546695    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:43:30.546695    5624 buildroot.go:174] setting up certificates
	I0429 11:43:30.546695    5624 provision.go:84] configureAuth start
	I0429 11:43:30.546695    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:32.614621    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:32.615259    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:32.615329    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:35.143643    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:35.143643    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:35.143902    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:37.253926    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:37.254038    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:37.254038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:39.800603    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:39.800603    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:39.800603    5624 provision.go:143] copyHostCerts
	I0429 11:43:39.800859    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:43:39.801095    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:43:39.801095    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:43:39.801095    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:43:39.802602    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:43:39.802836    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:43:39.802836    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:43:39.802836    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:43:39.804258    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:43:39.804561    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:43:39.804645    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:43:39.805079    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:43:39.806214    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800-m02 san=[127.0.0.1 172.26.185.80 ha-437800-m02 localhost minikube]
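
provision.go above issues a server certificate whose SANs cover the VM's IP and hostnames, signed by minikube's CA. A condensed crypto/x509 sketch of that SAN layout, self-signed here for brevity, which is where it deliberately differs from the real CA-signed flow; SANs, org, and expiry are copied from the log and the profile config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-437800-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            DNSNames:     []string{"ha-437800-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.185.80")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; minikube signs with ca.pem/ca-key.pem instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
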
	I0429 11:43:40.135861    5624 provision.go:177] copyRemoteCerts
	I0429 11:43:40.149763    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:43:40.150299    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:44.825619    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:44.826026    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:44.826420    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:43:44.939885    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7900845s)
	I0429 11:43:44.939885    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:43:44.939885    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:43:44.997527    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:43:44.997970    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:43:45.045774    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:43:45.045774    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 11:43:45.094941    5624 provision.go:87] duration metric: took 14.5481323s to configureAuth
	I0429 11:43:45.094997    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:43:45.095168    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:43:45.095168    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:49.701385    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:49.701683    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:49.707476    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:49.708202    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:49.708202    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:43:49.844576    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:43:49.844576    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:43:49.844576    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:43:49.844576    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:51.951752    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:51.951789    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:51.951910    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:54.462935    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:54.463754    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:54.469918    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:54.469918    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:54.470510    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.176.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:43:54.641714    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.176.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:43:54.641714    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:56.735635    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:56.736177    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:56.736234    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:59.257410    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:59.257410    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:59.263271    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:59.263271    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:59.263800    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:44:01.507236    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
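
The `diff -u ... || { mv ...; systemctl ...; }` command above is an install-if-changed guard: the rendered unit is only moved into place and docker daemon-reloaded, enabled, and restarted when it differs from what is installed (or, as here, when no unit exists yet). A local Go rendering of the same guard, for illustration only:

    package sketch

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the remote diff-or-install idiom above: docker
    // is only restarted when the rendered unit actually differs.
    func installIfChanged(path string, rendered []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, rendered) {
            return nil // unchanged: skip the restart entirely
        }
        if err := os.WriteFile(path, rendered, 0644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", "docker"},
            {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }
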
	
	I0429 11:44:01.509493    5624 machine.go:97] duration metric: took 45.448074s to provisionDockerMachine
	I0429 11:44:01.509586    5624 client.go:171] duration metric: took 1m55.2365581s to LocalClient.Create
	I0429 11:44:01.509586    5624 start.go:167] duration metric: took 1m55.2366505s to libmachine.API.Create "ha-437800"
	I0429 11:44:01.509586    5624 start.go:293] postStartSetup for "ha-437800-m02" (driver="hyperv")
	I0429 11:44:01.509715    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:44:01.524277    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:44:01.524277    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:03.639196    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:03.639196    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:03.639396    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:06.176738    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:06.177785    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:06.178325    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:06.294152    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7698374s)
	I0429 11:44:06.307153    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:44:06.315238    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:44:06.315352    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:44:06.315688    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:44:06.316797    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:44:06.316797    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:44:06.329529    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:44:06.350362    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:44:06.399519    5624 start.go:296] duration metric: took 4.8898947s for postStartSetup
	I0429 11:44:06.402549    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:08.486719    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:08.486719    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:08.487680    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:11.032622    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:11.032622    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:11.032622    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:44:11.035892    5624 start.go:128] duration metric: took 2m4.7658761s to createHost
	I0429 11:44:11.036525    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:13.163944    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:13.164021    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:13.164133    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:15.739025    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:15.739422    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:15.746073    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:44:15.746814    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:44:15.746814    5624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:44:15.878082    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391055.869843780
	
	I0429 11:44:15.878082    5624 fix.go:216] guest clock: 1714391055.869843780
	I0429 11:44:15.878082    5624 fix.go:229] Guest: 2024-04-29 11:44:15.86984378 +0000 UTC Remote: 2024-04-29 11:44:11.036488 +0000 UTC m=+334.725584301 (delta=4.83335578s)
	I0429 11:44:15.878202    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:17.925767    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:17.925815    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:17.925815    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:20.499212    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:20.499508    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:20.506087    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:44:20.506290    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:44:20.506290    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714391055
	I0429 11:44:20.664519    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:44:15 UTC 2024
	
	I0429 11:44:20.664519    5624 fix.go:236] clock set: Mon Apr 29 11:44:15 UTC 2024
	 (err=<nil>)
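
fix.go above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and corrects it with `sudo date -s @<seconds>`; here the guest ran ~4.83s ahead of the host's view, drift accumulated while createHost held the machines lock. A sketch of the comparison, with helper names invented and the sample values taken from the log:

    package main

    import (
        "errors"
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output such as "1714391055.869843780".
    // It assumes the nanosecond field carries 9 digits, as GNU date's %N prints.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        if len(parts) != 2 {
            return time.Time{}, errors.New("unexpected date output")
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestTime("1714391055.869843780") // value from the log
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, time.April, 29, 11, 44, 11, 36488000, time.UTC)
        fmt.Println(guest.Sub(remote))                 // ≈4.83s, matching the log's delta
        fmt.Printf("sudo date -s @%d\n", guest.Unix()) // the corrective command seen above
    }
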
	I0429 11:44:20.664519    5624 start.go:83] releasing machines lock for "ha-437800-m02", held for 2m14.3954344s
	I0429 11:44:20.664824    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:22.757042    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:22.757704    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:22.757844    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:25.261345    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:25.261635    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:25.265372    5624 out.go:177] * Found network options:
	I0429 11:44:25.268684    5624 out.go:177]   - NO_PROXY=172.26.176.3
	W0429 11:44:25.270733    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:44:25.273092    5624 out.go:177]   - NO_PROXY=172.26.176.3
	W0429 11:44:25.275248    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:44:25.276683    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:44:25.279331    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:44:25.279331    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:25.297766    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:44:25.297766    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:30.063522    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:30.063522    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:30.064673    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:30.087132    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:30.087132    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:30.087556    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:30.252212    5624 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9544077s)
	I0429 11:44:30.252291    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9729216s)
	W0429 11:44:30.252389    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:44:30.265482    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:44:30.301604    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:44:30.301604    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:44:30.301604    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:44:30.355595    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:44:30.387680    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:44:30.408592    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:44:30.420658    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:44:30.454073    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:44:30.488741    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:44:30.523956    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:44:30.552964    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:44:30.589420    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:44:30.626656    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:44:30.660759    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:44:30.693221    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:44:30.725222    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:44:30.758219    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:30.978828    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
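
The run of sed commands above rewrites /etc/containerd/config.toml in place to pin the pause image, select the cgroupfs driver (SystemdCgroup = false), and point conf_dir at /etc/cni/net.d, then reloads systemd and restarts containerd. A condensed Go sketch of the same pattern; sshRun is a hypothetical helper, and only three of the edits plus the restart are reproduced:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configureContainerdCgroupfs replays the style of in-place edits seen in
    // the log over a command runner.
    func configureContainerdCgroupfs(sshRun func(cmd string) error) error {
    	steps := []string{
    		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart containerd`,
    	}
    	for _, cmd := range steps {
    		if err := sshRun(cmd); err != nil {
    			return fmt.Errorf("%s: %w", cmd, err)
    		}
    	}
    	return nil
    }

    func main() {
    	// Example wiring with a local shell; minikube's runner wraps an SSH
    	// session instead.
    	_ = configureContainerdCgroupfs(func(cmd string) error {
    		return exec.Command("/bin/sh", "-c", cmd).Run()
    	})
    }
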
	I0429 11:44:31.012336    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:44:31.024921    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:44:31.063927    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:44:31.098911    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:44:31.151915    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:44:31.188433    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:44:31.225593    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:44:31.290549    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:44:31.314138    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:44:31.364806    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:44:31.384313    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:44:31.406433    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:44:31.457094    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:44:31.681193    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:44:31.879845    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:44:31.879996    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:44:31.926619    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:32.143059    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:44:34.687305    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5442264s)
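
The 130-byte /etc/docker/daemon.json pushed just before this restart selects the cgroupfs driver so Docker matches the kubelet. The exact bytes aren't in the log; a plausible shape, generated from Go for illustration:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // A guess at the shape of the daemon.json the log scp's before restarting
    // Docker; the key point is exec-opts selecting the cgroupfs driver.
    func main() {
    	cfg := map[string]any{
    		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver": "json-file",
    		"log-opts":   map[string]string{"max-size": "100m"},
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }
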
	I0429 11:44:34.701337    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:44:34.740720    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:44:34.780248    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:44:34.993950    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:44:35.210559    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:35.428939    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:44:35.475721    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:44:35.516612    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:35.747702    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 11:44:35.875483    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:44:35.889971    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
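
The "Will wait 60s for socket path" step polls /var/run/cri-dockerd.sock with stat until cri-dockerd comes up. A sketch of that wait loop; here the stat runs against the local filesystem, whereas minikube issues it on the guest over SSH:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path until it appears or the timeout
    // elapses, matching the "Will wait 60s for socket path" step.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
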
	I0429 11:44:35.900252    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:44:35.913594    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:44:35.932666    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:44:35.995475    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:44:36.006070    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:44:36.056079    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:44:36.095205    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:44:36.098429    5624 out.go:177]   - env NO_PROXY=172.26.176.3
	I0429 11:44:36.103417    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:44:36.110416    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:44:36.110416    5624 ip.go:210] interface addr: 172.26.176.1/20
	I0429 11:44:36.123771    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:44:36.131766    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
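
The /bin/bash one-liner above makes the hosts entry idempotent: grep -v strips any stale host.minikube.internal line, echo appends the fresh mapping, and sudo cp swaps the file in. The same replace-or-append semantics in Go (a local-file sketch; on the guest the final write needs sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites a hosts file so it contains exactly one line
    // mapping ip to name: drop any stale line for the name, then append the
    // fresh tab-separated mapping.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry for this name
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "172.26.176.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
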
	I0429 11:44:36.156153    5624 mustload.go:65] Loading cluster: ha-437800
	I0429 11:44:36.156997    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:44:36.157643    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:38.247518    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:38.247518    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:38.247518    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:44:38.248066    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.185.80
	I0429 11:44:38.248066    5624 certs.go:194] generating shared ca certs ...
	I0429 11:44:38.248066    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.249049    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:44:38.249476    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:44:38.249695    5624 certs.go:256] generating profile certs ...
	I0429 11:44:38.250369    5624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:44:38.250485    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e
	I0429 11:44:38.250623    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.185.80 172.26.191.254]
	I0429 11:44:38.620644    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e ...
	I0429 11:44:38.620644    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e: {Name:mk580a605ceda2e337454db64c47dc0599057a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.621643    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e ...
	I0429 11:44:38.621643    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e: {Name:mke1c5e386d821804eb4df2dee5e5f8ef6eebb15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.622935    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:44:38.635991    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
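
Above, minikube generated the apiserver serving certificate with the full IP SAN set [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.185.80 172.26.191.254], so clients can verify the in-cluster service VIP, the node IPs, and the HA VIP 172.26.191.254 alike. A self-contained sketch of issuing such a cert with Go's crypto/x509; the throwaway CA here stands in for the minikubeCA key pair that minikube reuses from disk:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newCA makes a throwaway self-signed CA so the example is runnable on
    // its own.
    func newCA() (*x509.Certificate, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    		KeyUsage:              x509.KeyUsageCertSign,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		return nil, nil, err
    	}
    	ca, err := x509.ParseCertificate(der)
    	return ca, key, err
    }

    func main() {
    	ca, caKey, err := newCA()
    	if err != nil {
    		panic(err)
    	}
    	// The SAN set from the log: service VIP, loopback, node IPs, HA VIP.
    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("172.26.176.3"), net.ParseIP("172.26.185.80"), net.ParseIP("172.26.191.254"),
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips, // the SAN list that makes the HA VIP verifiable
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
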
	I0429 11:44:38.636928    5624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
	I0429 11:44:38.636928    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:44:38.637487    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:44:38.637772    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:44:38.638017    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:44:38.638085    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:44:38.638376    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:44:38.638585    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:44:38.638585    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:44:38.639113    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:44:38.640516    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:44:38.640800    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:44:38.641354    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:44:38.641474    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:44:38.641474    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:38.642038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:40.736761    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:40.736761    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:40.737366    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:43.329380    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:44:43.329380    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:43.330824    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:44:43.444162    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 11:44:43.460651    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 11:44:43.501769    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 11:44:43.509321    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 11:44:43.546215    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 11:44:43.554672    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 11:44:43.593367    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 11:44:43.602244    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 11:44:43.641506    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 11:44:43.648749    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 11:44:43.697348    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 11:44:43.704306    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0429 11:44:43.727023    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:44:43.780702    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:44:43.829749    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:44:43.877586    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:44:43.926185    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 11:44:43.975126    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:44:44.022408    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:44:44.074790    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:44:44.123858    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:44:44.173847    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:44:44.221051    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:44:44.269751    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 11:44:44.302102    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 11:44:44.335363    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 11:44:44.371626    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 11:44:44.407921    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 11:44:44.443521    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0429 11:44:44.479115    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 11:44:44.530351    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:44:44.554167    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:44:44.588089    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.595790    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.609346    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.632204    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:44:44.671345    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:44:44.706448    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.714278    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.727262    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.748436    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 11:44:44.785175    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:44:44.818806    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.825505    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.839640    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.864528    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
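
Each certificate is installed by asking openssl for its subject hash and symlinking /etc/ssl/certs/<hash>.0 at it (b5213941.0 for minikubeCA above), which is what OpenSSL's hashed-directory lookup expects. A sketch of one installation; writing under /etc/ssl/certs naturally requires root:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the log's sequence: ask openssl for the cert's
    // subject hash, then symlink /etc/ssl/certs/<hash>.0 at the cert so the
    // c_rehash-style lookup can find it.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link; ignore "not exist"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
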
	I0429 11:44:44.899290    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:44:44.906248    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:44:44.906248    5624 kubeadm.go:928] updating node {m02 172.26.185.80 8443 v1.30.0 docker true true} ...
	I0429 11:44:44.906827    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.185.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:44:44.906956    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:44:44.919174    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:44:44.944791    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:44:44.945325    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
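
The manifest above runs kube-vip as a static pod: ARP advertisement of the VIP 172.26.191.254 on eth0, leader election over the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry), and, per the "auto-enabling control-plane load-balancing" line, load-balancing of port 8443 across control-plane nodes. A sketch of rendering such a manifest from a Go text/template, in the spirit of the kube-vip.go config-generation step; only a few of the env vars are reproduced:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed template in the shape of the static-pod manifest above.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        args: ["manager"]
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
        - name: cp_enable
          value: "true"
        - name: lb_enable
          value: "{{ .EnableLB }}"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	// Values taken from the log: HA VIP and apiserver port.
    	_ = t.Execute(os.Stdout, struct {
    		VIP      string
    		Port     int
    		EnableLB bool
    	}{VIP: "172.26.191.254", Port: 8443, EnableLB: true})
    }
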
	I0429 11:44:44.958547    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:44:44.977181    5624 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 11:44:44.989210    5624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 11:44:45.010958    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0429 11:44:45.011498    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0429 11:44:45.011498    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0429 11:44:46.088829    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:44:46.100862    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:44:46.112840    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 11:44:46.112840    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 11:44:47.260346    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:44:47.276037    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:44:47.284750    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 11:44:47.284750    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 11:44:48.910747    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:44:48.937676    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:44:48.950297    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:44:48.958051    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 11:44:48.958051    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
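
Each binary transfer above follows a stat-then-scp pattern: probe the guest path first and copy from the host cache only on a miss. A simplified sketch; minikube's real check compares size and mtime, while this one only tests presence, and the ssh/scp wrappers are stand-ins:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBinary stats the remote path and only scp's the cached local copy
    // when the stat fails, mirroring the existence checks in the log.
    func ensureBinary(host, local, remote string) error {
    	// A non-zero exit from stat means the binary is missing on the guest.
    	probe := fmt.Sprintf("stat -c '%%s %%y' %s", remote)
    	if err := exec.Command("ssh", host, probe).Run(); err == nil {
    		return nil // already present; skip the copy
    	}
    	out, err := exec.Command("scp", local, host+":"+remote).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("scp %s: %v: %s", local, err, out)
    	}
    	return nil
    }

    func main() {
    	err := ensureBinary("docker@172.26.185.80",
    		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0\kubelet`,
    		"/var/lib/minikube/binaries/v1.30.0/kubelet")
    	fmt.Println(err)
    }
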
	I0429 11:44:49.573575    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 11:44:49.592202    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 11:44:49.630020    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:44:49.670949    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 11:44:49.722997    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:44:49.730921    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:44:49.774728    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:50.009298    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:44:50.048395    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:44:50.048395    5624 start.go:316] joinCluster: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:44:50.049413    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 11:44:50.049413    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:52.155279    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:52.155279    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:52.156005    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:54.647307    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:44:54.648258    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:54.649035    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:44:54.892042    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8425917s)
	I0429 11:44:54.892042    5624 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:44:54.892042    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kw0b2.qry6qq722q05dz2j --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m02 --control-plane --apiserver-advertise-address=172.26.185.80 --apiserver-bind-port=8443"
	I0429 11:45:43.171446    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kw0b2.qry6qq722q05dz2j --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m02 --control-plane --apiserver-advertise-address=172.26.185.80 --apiserver-bind-port=8443": (48.2789691s)
	I0429 11:45:43.171562    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 11:45:44.088437    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800-m02 minikube.k8s.io/updated_at=2024_04_29T11_45_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=false
	I0429 11:45:44.273766    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-437800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 11:45:44.432630    5624 start.go:318] duration metric: took 54.3838103s to joinCluster
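
The join itself is two SSH steps: mint a join command on the primary with "kubeadm token create --print-join-command --ttl=0", then run it on the new node with the control-plane flags seen above appended. A sketch of assembling that invocation; the token and hash are redacted placeholders:

    package main

    import "fmt"

    // buildJoinCmd appends the extra flags from the log to the output of
    // `kubeadm token create --print-join-command --ttl=0`.
    func buildJoinCmd(printJoin, nodeName, advertiseIP string, port int) string {
    	return fmt.Sprintf(
    		"sudo %s --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock "+
    			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
    		printJoin, nodeName, advertiseIP, port)
    }

    func main() {
    	join := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
    	fmt.Println(buildJoinCmd(join, "ha-437800-m02", "172.26.185.80", 8443))
    }
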
	I0429 11:45:44.432630    5624 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:45:44.435930    5624 out.go:177] * Verifying Kubernetes components...
	I0429 11:45:44.433503    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:45:44.452994    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:45:44.866764    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:45:44.899509    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:45:44.900401    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 11:45:44.900497    5624 kubeadm.go:477] Overriding stale ClientConfig host https://172.26.191.254:8443 with https://172.26.176.3:8443
	I0429 11:45:44.901420    5624 node_ready.go:35] waiting up to 6m0s for node "ha-437800-m02" to be "Ready" ...
	I0429 11:45:44.901635    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:44.901635    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:44.901635    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:44.901690    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:44.920349    5624 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 11:45:45.416197    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:45.416197    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:45.416197    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:45.416197    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:45.590170    5624 round_trippers.go:574] Response Status: 200 OK in 173 milliseconds
	I0429 11:45:45.904939    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:45.905073    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:45.905073    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:45.905073    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:45.910522    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:46.410919    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:46.411022    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:46.411022    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:46.411022    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:46.428796    5624 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 11:45:46.901970    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:46.901970    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:46.901970    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:46.902276    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:46.915963    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:45:46.916193    5624 node_ready.go:53] node "ha-437800-m02" has status "Ready":"False"
	I0429 11:45:47.403594    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:47.403594    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:47.403594    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:47.403594    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:47.407449    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:47.910257    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:47.910295    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:47.910324    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:47.910324    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:47.915937    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:48.415959    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:48.415959    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:48.415959    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:48.415959    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:48.420593    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:48.905257    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:48.905257    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:48.905257    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:48.905257    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:48.911274    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:49.414881    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:49.414881    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:49.414881    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:49.414881    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:49.429464    5624 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 11:45:49.430503    5624 node_ready.go:53] node "ha-437800-m02" has status "Ready":"False"
	I0429 11:45:49.905318    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:49.905370    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:49.905370    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:49.905370    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.050527    5624 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0429 11:45:50.411430    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:50.411430    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:50.411430    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:50.411430    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.418059    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:50.913967    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:50.914243    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:50.914243    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:50.914243    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.919405    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:51.416973    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:51.416973    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.417263    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.417263    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.421896    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:51.423569    5624 node_ready.go:49] node "ha-437800-m02" has status "Ready":"True"
	I0429 11:45:51.423599    5624 node_ready.go:38] duration metric: took 6.5221281s for node "ha-437800-m02" to be "Ready" ...
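
The repeated GETs of /api/v1/nodes/ha-437800-m02 above are a roughly 500ms poll until the node's Ready condition flips to True (6.52s here). An equivalent loop with client-go; the kubeconfig path is a placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server every 500ms, as the round_trippers
    // lines above do, until the node reports Ready=True or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitNodeReady(cs, "ha-437800-m02", 6*time.Minute))
    }
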
	I0429 11:45:51.423666    5624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:45:51.423875    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:45:51.423875    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.423875    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.423875    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.435557    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:45:51.445803    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.445803    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vvf4j
	I0429 11:45:51.445803    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.445803    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.445803    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.456515    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:51.459308    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.459308    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.459308    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.459308    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.472673    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:45:51.473734    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.473734    5624 pod_ready.go:81] duration metric: took 27.931ms for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.473734    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.473734    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxvcx
	I0429 11:45:51.473734    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.473734    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.473734    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.484311    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:51.485235    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.485286    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.485286    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.485286    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.491626    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:51.491937    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.491937    5624 pod_ready.go:81] duration metric: took 18.2035ms for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.491937    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.491937    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800
	I0429 11:45:51.491937    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.491937    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.491937    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.501793    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:45:51.505535    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.505574    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.505574    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.505574    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.521428    5624 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0429 11:45:51.522509    5624 pod_ready.go:92] pod "etcd-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.522509    5624 pod_ready.go:81] duration metric: took 30.5716ms for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.522561    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.522731    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:51.522770    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.522770    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.522770    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.529057    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:51.529974    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:51.529974    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.529974    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.529974    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.534174    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.028329    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:52.028329    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.028433    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.028433    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.032596    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.034395    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:52.034523    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.034523    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.034523    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.038967    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.530191    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:52.530191    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.530191    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.530191    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.536169    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:52.536982    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:52.536982    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.536982    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.536982    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.541582    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.027884    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:53.027884    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.027884    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.027884    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.034466    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:53.036050    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:53.036112    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.036112    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.036112    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.039968    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:53.538452    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:53.538535    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.538535    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.538535    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.543289    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.544579    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:53.544579    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.544579    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.544579    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.548666    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.549891    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:54.037990    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:54.037990    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.038167    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.038167    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.044978    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.047478    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:54.047478    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.047478    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.047478    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.053596    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.529895    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:54.529895    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.529895    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.530008    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.536761    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.537046    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:54.537670    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.537670    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.537670    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.542507    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:55.035585    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:55.035585    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.035585    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.035585    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.041042    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:55.041918    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:55.041918    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.041918    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.041918    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.053830    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:45:55.527254    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:55.527315    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.527390    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.527390    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.532068    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:55.534222    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:55.534222    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.534222    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.534222    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.538460    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:56.033359    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:56.033359    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.033359    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.033359    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.038408    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.040284    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:56.040284    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.040284    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.040284    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.045366    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.045579    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:56.523859    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:56.523859    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.523859    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.523859    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.528910    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.531025    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:56.531097    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.531097    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.531173    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.535342    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:57.028554    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:57.028554    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.028648    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.028648    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.033992    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:57.035442    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:57.035520    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.035520    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.035520    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.040789    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:57.535067    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:57.535067    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.535067    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.535067    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.539696    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:57.540783    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:57.540783    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.540783    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.540783    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.546370    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:58.029249    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:58.029249    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.029310    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.029310    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.034127    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.035151    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:58.035214    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.035214    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.035214    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.039772    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.536476    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:58.536476    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.536476    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.536476    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.542113    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:58.544014    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:58.544014    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.544014    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.544014    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.549430    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.550005    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:59.023811    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:59.023811    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.023811    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.023898    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.029858    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:59.031195    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.031252    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.031252    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.031252    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.035725    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.531346    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:59.531346    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.531346    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.531346    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.535953    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.537652    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.538181    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.538181    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.538181    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.544846    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:59.545434    5624 pod_ready.go:92] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.545434    5624 pod_ready.go:81] duration metric: took 8.0227797s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.545506    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.545619    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:45:59.545619    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.545696    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.545696    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.556236    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:59.557353    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.557353    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.557353    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.557353    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.561362    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.562601    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.562689    5624 pod_ready.go:81] duration metric: took 17.1832ms for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.562710    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.562839    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:45:59.562916    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.562916    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.562916    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.567498    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.568607    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.569174    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.569174    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.569174    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.572194    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.573884    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.573884    5624 pod_ready.go:81] duration metric: took 11.1745ms for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.573988    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.574101    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:45:59.574101    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.574167    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.574167    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.578958    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.579763    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.579763    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.579763    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.579763    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.583339    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.584380    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.584380    5624 pod_ready.go:81] duration metric: took 10.3914ms for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.584380    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.584380    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:45:59.584380    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.584380    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.584380    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.588354    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.589651    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.589651    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.589651    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.589651    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.593335    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.594603    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.594703    5624 pod_ready.go:81] duration metric: took 10.2231ms for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.594703    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.735558    5624 request.go:629] Waited for 140.4274ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:45:59.735812    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:45:59.735812    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.735812    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.735812    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.742144    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:59.941350    5624 request.go:629] Waited for 197.9448ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.941740    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.941740    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.941740    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.941740    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.947450    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:59.948169    5624 pod_ready.go:92] pod "kube-proxy-hvzz9" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.948169    5624 pod_ready.go:81] duration metric: took 353.4633ms for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.948169    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.131790    5624 request.go:629] Waited for 183.515ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:46:00.132499    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:46:00.132499    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.132499    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.132499    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.137757    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.335071    5624 request.go:629] Waited for 194.6782ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:00.335175    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:00.335175    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.335237    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.335237    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.340709    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.341835    5624 pod_ready.go:92] pod "kube-proxy-pzfjr" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:00.341835    5624 pod_ready.go:81] duration metric: took 393.5582ms for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.341835    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.535429    5624 request.go:629] Waited for 193.5922ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:46:00.535429    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:46:00.535429    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.535429    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.535429    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.541363    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.740536    5624 request.go:629] Waited for 197.9892ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:46:00.740791    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:46:00.740791    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.740791    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.740867    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.746771    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.747631    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:00.747728    5624 pod_ready.go:81] duration metric: took 405.7921ms for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.747728    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.946125    5624 request.go:629] Waited for 198.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:46:00.946125    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:46:00.946401    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.946401    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.946401    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.952201    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:01.137294    5624 request.go:629] Waited for 182.7172ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:01.137559    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:01.137559    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.137559    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.137559    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.143599    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:46:01.145595    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:01.145595    5624 pod_ready.go:81] duration metric: took 397.8638ms for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:01.145595    5624 pod_ready.go:38] duration metric: took 9.7218535s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
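
The pod_ready loop above issues a GET for each pod and its node roughly every 500ms until the pod's Ready condition reports True. A minimal sketch of that polling pattern with client-go follows; waitPodReady is an illustrative name and not minikube's actual pod_ready.go helper:

    // Package k8swait: a minimal sketch of pod-readiness polling, assuming client-go.
    package k8swait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod every 500ms until its PodReady condition is True
    // or the timeout expires, mirroring the cadence of the log lines above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }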
	I0429 11:46:01.145595    5624 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:46:01.158501    5624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:46:01.196449    5624 api_server.go:72] duration metric: took 16.7636888s to wait for apiserver process to appear ...
	I0429 11:46:01.196517    5624 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:46:01.196580    5624 api_server.go:253] Checking apiserver healthz at https://172.26.176.3:8443/healthz ...
	I0429 11:46:01.204092    5624 api_server.go:279] https://172.26.176.3:8443/healthz returned 200:
	ok
	I0429 11:46:01.204830    5624 round_trippers.go:463] GET https://172.26.176.3:8443/version
	I0429 11:46:01.204863    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.204863    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.204863    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.205705    5624 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 11:46:01.206647    5624 api_server.go:141] control plane version: v1.30.0
	I0429 11:46:01.206647    5624 api_server.go:131] duration metric: took 10.1299ms to wait for apiserver health ...
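
The healthz probe above is a plain HTTPS GET against /healthz that expects the literal body "ok". A minimal sketch, assuming for brevity that certificate verification is skipped (minikube itself trusts the cluster CA rather than doing this):

    package k8swait

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz GETs <server>/healthz and reports whether it returned "ok".
    func checkHealthz(server string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for a short sketch: skip cert verification.
    		// A real client should trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(server + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
    	}
    	return nil
    }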
	I0429 11:46:01.206647    5624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:46:01.341614    5624 request.go:629] Waited for 134.9143ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.341773    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.341773    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.341820    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.341820    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.351197    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:46:01.358177    5624 system_pods.go:59] 17 kube-system pods found
	I0429 11:46:01.358177    5624 system_pods.go:61] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:46:01.358177    5624 system_pods.go:74] duration metric: took 151.5284ms to wait for pod list to return data ...
	I0429 11:46:01.358177    5624 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:46:01.542386    5624 request.go:629] Waited for 184.2078ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:46:01.542721    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:46:01.542721    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.542721    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.542721    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.556688    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:46:01.557360    5624 default_sa.go:45] found service account: "default"
	I0429 11:46:01.557360    5624 default_sa.go:55] duration metric: took 199.1818ms for default service account to be created ...
	I0429 11:46:01.557360    5624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:46:01.745662    5624 request.go:629] Waited for 187.9148ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.745874    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.745874    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.745874    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.745874    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.755714    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:46:01.766837    5624 system_pods.go:86] 17 kube-system pods found
	I0429 11:46:01.766837    5624 system_pods.go:89] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:46:01.766837    5624 system_pods.go:126] duration metric: took 209.4756ms to wait for k8s-apps to be running ...
	I0429 11:46:01.766837    5624 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:46:01.780207    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:46:01.808832    5624 system_svc.go:56] duration metric: took 41.9945ms for WaitForService to wait for kubelet
	I0429 11:46:01.808925    5624 kubeadm.go:576] duration metric: took 17.3761597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:46:01.808995    5624 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:46:01.934720    5624 request.go:629] Waited for 125.4108ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes
	I0429 11:46:01.934902    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes
	I0429 11:46:01.934902    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.934902    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.934995    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.943451    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:46:01.945023    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:46:01.945023    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:46:01.945023    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:46:01.945023    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:46:01.945023    5624 node_conditions.go:105] duration metric: took 136.0269ms to run NodePressure ...
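
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in this section come from client-go's token-bucket rate limiter: once the Burst allowance is spent, further requests queue at QPS per second and request.go logs the wait. A minimal sketch of where those knobs live (the values shown are client-go's historical defaults, used here only as an illustration):

    package k8swait

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose requests are rate limited
    // client-side; exhausting Burst produces exactly the "Waited for ..."
    // log lines seen above.
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 5    // steady-state requests per second
    	cfg.Burst = 10 // short bursts allowed above QPS
    	return kubernetes.NewForConfig(cfg)
    }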
	I0429 11:46:01.945023    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:46:01.945023    5624 start.go:254] writing updated cluster config ...
	I0429 11:46:01.949229    5624 out.go:177] 
	I0429 11:46:01.964913    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:46:01.964913    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:46:01.969831    5624 out.go:177] * Starting "ha-437800-m03" control-plane node in "ha-437800" cluster
	I0429 11:46:01.973470    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:46:01.973470    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:46:01.973470    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:46:01.973996    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:46:01.974268    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:46:01.983093    5624 start.go:360] acquireMachinesLock for ha-437800-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:46:01.983093    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800-m03"
	I0429 11:46:01.983093    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:46:01.983649    5624 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0429 11:46:01.986594    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:46:01.986808    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:46:01.986808    5624 client.go:168] LocalClient.Create starting
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:46:01.988037    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:46:01.988181    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:46:01.988244    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:46:05.768353    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:46:05.768982    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:05.769070    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:46:07.402106    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:46:07.402106    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:07.402825    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:46:11.262097    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:46:11.262170    5624 main.go:141] libmachine: [stderr =====>] : 
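
Each "[executing ==>]" line above is the hyperv driver shelling out to powershell.exe with -NoProfile -NonInteractive and capturing stdout and stderr separately, which is what the paired "[stdout =====>]" / "[stderr =====>]" lines echo back. A minimal sketch of that invocation pattern (runPowerShell is an illustrative name, not the driver's exact function):

    // Package hypervutil: sketches of the Hyper-V driver's shell-out pattern.
    package hypervutil

    import (
    	"bytes"
    	"os/exec"
    )

    // runPowerShell runs one PowerShell command non-interactively and returns
    // stdout and stderr separately, mirroring the driver's log markers.
    func runPowerShell(command string) (stdout, stderr string, err error) {
    	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command)
    	var out, errBuf bytes.Buffer
    	cmd.Stdout = &out
    	cmd.Stderr = &errBuf
    	err = cmd.Run()
    	return out.String(), errBuf.String(), err
    }

For example, the switch query above corresponds to runPowerShell(`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|...)`), with the JSON arriving on stdout.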
	I0429 11:46:11.264343    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:46:11.756290    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:46:11.880364    5624 main.go:141] libmachine: Creating VM...
	I0429 11:46:11.880364    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:46:14.917836    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:46:14.917836    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:14.918670    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:46:14.918843    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:46:16.781713    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:46:16.782623    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:16.782623    5624 main.go:141] libmachine: Creating VHD
	I0429 11:46:16.782623    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:46:20.542223    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CF6A7843-27CB-4BA6-9EA2-8DFB317FB644
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:46:20.542624    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:20.542624    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:46:20.542624    5624 main.go:141] libmachine: Writing SSH key tar header
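
The "Writing magic tar header" / "Writing SSH key tar header" lines reflect the boot2docker convention: the driver writes a small tar stream containing the SSH key directly into the start of the fixed VHD's data area, so the guest can extract it on first boot, and only then converts the disk to a dynamic VHD. A minimal sketch of the idea; the entry name inside the tar is an assumption here, not the driver's exact layout:

    package hypervutil

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar writes a tiny tar archive holding an SSH public key into
    // the beginning of a raw disk image, the "magic tar header" trick used
    // by boot2docker-style images.
    func writeKeyTar(diskPath string, pubKey []byte) error {
    	f, err := os.OpenFile(diskPath, os.O_WRONLY, 0o644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(pubKey))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close() // flushes the trailing zero blocks of the archive
    }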
	I0429 11:46:20.552938    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:46:23.795321    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:23.796308    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:23.796308    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd' -SizeBytes 20000MB
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:46:30.104389    5624 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-437800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 11:46:30.104478    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:30.104478    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800-m03 -DynamicMemoryEnabled $false
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800-m03 -Count 2
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\boot2docker.iso'
	I0429 11:46:37.111647    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:37.112216    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:37.112312    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd'
	I0429 11:46:39.764115    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:39.764559    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:39.764559    5624 main.go:141] libmachine: Starting VM...
	I0429 11:46:39.764559    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800-m03
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:42.902101    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:45.237975    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:45.238553    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:45.238553    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:47.821560    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:47.821560    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:48.824829    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:53.729210    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:53.729210    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:54.741468    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:56.941165    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:56.941235    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:56.941235    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:59.496536    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:59.496606    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:00.496647    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:02.729686    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:02.729781    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:02.729848    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:05.345785    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:47:05.345785    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:06.359637    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:08.583215    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:08.584072    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:08.584163    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:11.211931    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:11.212513    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:11.212713    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:13.398329    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:13.398381    5624 main.go:141] libmachine: [stderr =====>] : 
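The repeated Get-VM state / ipaddresses queries above are the "Waiting for host to start..." loop: the driver polls until the first network adapter reports a usable address (172.26.177.113 here, after four retries). A sketch of that loop, reusing the assumed runPS helper from the earlier snippet; the poll interval and exit condition are inferred from the roughly one-second gaps between retries:

package provision

import (
	"net"
	"strings"
	"time"
)

// waitForIP polls the VM state and the first adapter address until a value
// parses as an IP; runPS is the assumed PowerShell helper sketched above.
func waitForIP(runPS func(string) (string, error), vm string) (string, error) {
	for {
		state, err := runPS(`( Hyper-V\Get-VM ` + vm + ` ).state`)
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(state) == "Running" {
			out, err := runPS(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
			if err != nil {
				return "", err
			}
			if ip := strings.TrimSpace(out); net.ParseIP(ip) != nil {
				return ip, nil // e.g. 172.26.177.113 above
			}
		}
		time.Sleep(time.Second)
	}
}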
	I0429 11:47:13.398381    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:47:13.398381    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:15.651373    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:15.651373    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:15.651537    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:18.261274    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:18.261754    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:18.269219    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:18.281521    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:18.281521    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:47:18.413340    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
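With the address known, provisioning switches to SSH ("Using SSH client type: native"); the first command is a bare hostname. A self-contained sketch of one such round-trip using golang.org/x/crypto/ssh, as a stand-in for libmachine's native client, with the key path and username taken from the sshutil lines later in the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest and runs a single command, returning combined
// output; host-key checking is skipped because these are throwaway test VMs.
func runSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("172.26.177.113:22", "docker",
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa`,
		"hostname")
	fmt.Println(out, err)
}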
	
	I0429 11:47:18.413340    5624 buildroot.go:166] provisioning hostname "ha-437800-m03"
	I0429 11:47:18.413340    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:20.571577    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:20.572096    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:20.572297    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:23.166526    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:23.166526    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:23.173476    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:23.173476    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:23.173476    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800-m03 && echo "ha-437800-m03" | sudo tee /etc/hostname
	I0429 11:47:23.340958    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800-m03
	
	I0429 11:47:23.340958    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:25.494819    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:25.494819    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:25.495491    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:28.117858    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:28.117858    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:28.124316    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:28.125046    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:28.125046    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:47:28.270803    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:47:28.270900    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:47:28.270968    5624 buildroot.go:174] setting up certificates
	I0429 11:47:28.271022    5624 provision.go:84] configureAuth start
	I0429 11:47:28.271022    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:30.406485    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:30.406694    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:30.406694    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:32.993151    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:32.994166    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:32.994166    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:35.127784    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:35.127878    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:35.127878    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:37.680355    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:37.680355    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:37.680355    5624 provision.go:143] copyHostCerts
	I0429 11:47:37.680355    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:47:37.681376    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:47:37.681376    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:47:37.681376    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:47:37.682373    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:47:37.682373    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:47:37.682373    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:47:37.683375    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:47:37.684372    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:47:37.684372    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:47:37.684372    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:47:37.684372    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:47:37.685377    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800-m03 san=[127.0.0.1 172.26.177.113 ha-437800-m03 localhost minikube]
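provision.go:117 regenerates the machine's server certificate because the SAN set now has to include this node's IP. A sketch of the equivalent with Go's crypto/x509; the key size, lifetime, and usages are assumptions, while the SAN list mirrors the log line:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server cert (DER) whose SANs cover the
// node IP and hostnames from the log: 127.0.0.1, 172.26.177.113,
// ha-437800-m03, localhost, minikube.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-437800-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // lifetime is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.177.113")},
		DNSNames:     []string{"ha-437800-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}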
	I0429 11:47:37.858334    5624 provision.go:177] copyRemoteCerts
	I0429 11:47:37.871840    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:47:37.871840    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:40.013723    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:40.013784    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:40.013784    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:42.636118    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:42.636118    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:42.636118    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:47:42.751004    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879075s)
	I0429 11:47:42.751058    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:47:42.751181    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:47:42.801862    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:47:42.801862    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:47:42.854096    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:47:42.854544    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 11:47:42.905578    5624 provision.go:87] duration metric: took 14.6343612s to configureAuth
	I0429 11:47:42.905638    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:47:42.906384    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:47:42.906452    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:45.069113    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:45.069113    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:45.069591    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:47.653940    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:47.653940    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:47.659475    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:47.660395    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:47.660490    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:47:47.798799    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:47:47.798799    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:47:47.798799    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:47:47.798799    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:52.547974    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:52.548746    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:52.555517    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:52.556188    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:52.556188    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.176.3"
	Environment="NO_PROXY=172.26.176.3,172.26.185.80"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:47:52.730280    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.176.3
	Environment=NO_PROXY=172.26.176.3,172.26.185.80
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:47:52.730280    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:54.868090    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:54.868090    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:54.868462    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:57.517408    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:57.517408    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:57.525741    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:57.525873    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:57.525873    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:47:59.747511    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 11:47:59.747511    5624 machine.go:97] duration metric: took 46.3487684s to provisionDockerMachine
	I0429 11:47:59.747511    5624 client.go:171] duration metric: took 1m57.7597842s to LocalClient.Create
	I0429 11:47:59.747714    5624 start.go:167] duration metric: took 1m57.7599876s to libmachine.API.Create "ha-437800"
	I0429 11:47:59.747796    5624 start.go:293] postStartSetup for "ha-437800-m03" (driver="hyperv")
	I0429 11:47:59.747796    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:47:59.760743    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:47:59.760743    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:04.449517    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:04.449517    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:04.450554    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:04.554020    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7931328s)
	I0429 11:48:04.572980    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:48:04.581788    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:48:04.581844    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:48:04.581984    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:48:04.583341    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:48:04.583341    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:48:04.600142    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:48:04.620153    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:48:04.668576    5624 start.go:296] duration metric: took 4.9207417s for postStartSetup
	I0429 11:48:04.671227    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:06.814484    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:06.814484    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:06.814855    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:09.430083    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:09.430735    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:09.430990    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:48:09.433320    5624 start.go:128] duration metric: took 2m7.4486755s to createHost
	I0429 11:48:09.433530    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:11.569737    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:11.569737    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:11.570760    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:14.187728    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:14.188594    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:14.195144    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:48:14.195561    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:48:14.195648    5624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:48:14.314645    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391294.307573355
	
	I0429 11:48:14.314645    5624 fix.go:216] guest clock: 1714391294.307573355
	I0429 11:48:14.314645    5624 fix.go:229] Guest: 2024-04-29 11:48:14.307573355 +0000 UTC Remote: 2024-04-29 11:48:09.4334711 +0000 UTC m=+573.120707401 (delta=4.874102255s)
	I0429 11:48:14.314645    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:16.456121    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:16.456266    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:16.456396    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:19.086912    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:19.086912    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:19.094693    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:48:19.094693    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:48:19.094693    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714391294
	I0429 11:48:19.239356    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:48:14 UTC 2024
	
	I0429 11:48:19.239449    5624 fix.go:236] clock set: Mon Apr 29 11:48:14 UTC 2024
	 (err=<nil>)
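fix.go above measures a 4.87s drift between the guest clock (read with date +%s.%N) and the host, then resets the guest with sudo date -s @<seconds>. A sketch of that check; the one-second threshold and the choice of reference clock are assumptions, not minikube's exact logic:

package provision

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// fixGuestClock reads the guest clock over SSH, compares it with the local
// clock, and resets the guest when the drift is too large. runSSH is the
// assumed single-command runner sketched earlier.
func fixGuestClock(runSSH func(string) (string, error)) error {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	guest := time.Unix(int64(secs), 0)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock: %s (delta=%s)\n", guest, delta)
	if delta > time.Second { // e.g. the 4.87s delta above triggers the fix
		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}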
	I0429 11:48:19.239449    5624 start.go:83] releasing machines lock for "ha-437800-m03", held for 2m17.2552844s
	I0429 11:48:19.239672    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:21.362263    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:21.362263    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:21.362906    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:23.943534    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:23.944566    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:23.951696    5624 out.go:177] * Found network options:
	I0429 11:48:23.955390    5624 out.go:177]   - NO_PROXY=172.26.176.3,172.26.185.80
	W0429 11:48:23.957968    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.957968    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:48:23.960050    5624 out.go:177]   - NO_PROXY=172.26.176.3,172.26.185.80
	W0429 11:48:23.962993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.962993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.963993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.963993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:48:23.967383    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:48:23.967592    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:23.979340    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:48:23.979340    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:26.178556    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:26.178556    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:26.178685    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:26.178941    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:26.178941    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:26.179070    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:28.883720    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:28.884563    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:28.884884    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:28.912393    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:28.912508    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:28.913175    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:29.207261    5624 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2278798s)
	W0429 11:48:29.207367    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:48:29.207367    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2398468s)
	I0429 11:48:29.225176    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:48:29.257993    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:48:29.257993    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:48:29.257993    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:48:29.307882    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:48:29.342536    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:48:29.363578    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:48:29.376599    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:48:29.412496    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:48:29.446572    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:48:29.480855    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:48:29.515753    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:48:29.549677    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:48:29.585777    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:48:29.622568    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:48:29.662728    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:48:29.697682    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:48:29.731627    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:29.953855    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:48:29.989119    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:48:30.002975    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:48:30.045795    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:48:30.086990    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:48:30.142099    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:48:30.185868    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:48:30.223663    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:48:30.293850    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:48:30.323682    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:48:30.376694    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:48:30.396234    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:48:30.414468    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:48:30.464785    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:48:30.685362    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:48:30.884801    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:48:30.884930    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:48:30.937469    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:31.159997    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:48:33.786003    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6259846s)
	I0429 11:48:33.799736    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:48:33.840506    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:48:33.878103    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:48:34.106428    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:48:34.323202    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:34.559839    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:48:34.607452    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:48:34.651341    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:34.875385    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 11:48:34.989218    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:48:35.005720    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 11:48:35.014247    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:48:35.028741    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:48:35.049460    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:48:35.114709    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:48:35.124732    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:48:35.169763    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:48:35.211496    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:48:35.215480    5624 out.go:177]   - env NO_PROXY=172.26.176.3
	I0429 11:48:35.221141    5624 out.go:177]   - env NO_PROXY=172.26.176.3,172.26.185.80
	I0429 11:48:35.222988    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:48:35.226995    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:48:35.230702    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:48:35.230702    5624 ip.go:210] interface addr: 172.26.176.1/20
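ip.go resolves the host-side address of the "vEthernet (Default Switch)" adapter so the guest can reach the host as host.minikube.internal (172.26.176.1 here). The same walk with the standard library; the prefix matching and IPv4 preference mirror the log:

package provision

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address of the first interface whose
// name matches the given prefix, logging mismatches the way ip.go does.
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // e.g. 172.26.176.1
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}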
	I0429 11:48:35.244718    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:48:35.252357    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:48:35.279890    5624 mustload.go:65] Loading cluster: ha-437800
	I0429 11:48:35.280627    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:48:35.281075    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:37.407722    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:37.407722    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:37.408809    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:48:37.409554    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.177.113
	I0429 11:48:37.409554    5624 certs.go:194] generating shared ca certs ...
	I0429 11:48:37.409554    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.409872    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:48:37.410549    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:48:37.410989    5624 certs.go:256] generating profile certs ...
	I0429 11:48:37.411597    5624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:48:37.411745    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387
	I0429 11:48:37.411876    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.185.80 172.26.177.113 172.26.191.254]
	I0429 11:48:37.985473    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 ...
	I0429 11:48:37.985473    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387: {Name:mk8f284536de05666171e9d2eb24ea992ac72bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.987600    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387 ...
	I0429 11:48:37.987600    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387: {Name:mk2c8d4a06d020bda3f33fab6a0deb8a93c9ba22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.988713    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:48:38.000248    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
	I0429 11:48:38.001918    5624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
	I0429 11:48:38.001918    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:48:38.002623    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:48:38.003363    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:48:38.003439    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:48:38.003439    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:48:38.004187    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:48:38.004384    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:48:38.004384    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:48:38.004384    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:48:38.005034    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:48:38.005034    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:48:38.005671    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:48:38.005671    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:48:38.006231    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:38.006402    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:48:38.006817    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:40.208926    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:40.208926    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:40.209398    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:42.847465    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:48:42.847465    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:42.848280    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:48:42.948236    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 11:48:42.957789    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 11:48:42.998369    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 11:48:43.006810    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 11:48:43.044908    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 11:48:43.053782    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 11:48:43.090147    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 11:48:43.099041    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 11:48:43.136646    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 11:48:43.144722    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 11:48:43.187102    5624 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 11:48:43.197106    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0429 11:48:43.221711    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:48:43.278599    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:48:43.337198    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:48:43.388186    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:48:43.439253    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 11:48:43.489075    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 11:48:43.541788    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:48:43.594574    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:48:43.646409    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:48:43.696192    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:48:43.745945    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:48:43.797843    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 11:48:43.829408    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 11:48:43.865585    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 11:48:43.900330    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 11:48:43.935358    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 11:48:43.973524    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0429 11:48:44.008647    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 11:48:44.069863    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:48:44.093850    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:48:44.129529    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.137218    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.150395    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.171446    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:48:44.208595    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:48:44.245233    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.254607    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.268606    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.293153    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:48:44.334059    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:48:44.373980    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.382159    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.395190    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.420888    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
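The test -L / ln -fs pairs above install each CA under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients on the guest can find it by hash lookup. A small sketch of that hash-and-link step, shelling out to openssl for the hash just as the log does:

package certs

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert symlinks /etc/ssl/certs/<subject-hash>.0 to the PEM file,
// e.g. b5213941.0 -> minikubeCA.pem as in the log.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}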
	I0429 11:48:44.458515    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:48:44.466343    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:48:44.466669    5624 kubeadm.go:928] updating node {m03 172.26.177.113 8443 v1.30.0 docker true true} ...
	I0429 11:48:44.466967    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.177.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:48:44.467107    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:48:44.482880    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:48:44.514531    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:48:44.514649    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
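The kube-vip manifest above is rendered from a template and then copied into /etc/kubernetes/manifests (the scp line further down). A minimal Go sketch of that technique, using text/template over a simplified, hypothetical config struct (not minikube's actual kube-vip.go types), assuming the VIP/port/interface values visible in the log:

package main

import (
	"os"
	"text/template"
)

// vipConfig is a hypothetical, trimmed-down stand-in for the values
// substituted into a kube-vip static-pod template.
type vipConfig struct {
	VIP       string
	Port      string
	Interface string
}

// manifestTmpl mirrors the shape of the manifest logged above, reduced
// to the fields that vary per cluster.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Render to stdout; a real deployment would write this to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
	cfg := vipConfig{VIP: "172.26.191.254", Port: "8443", Interface: "eth0"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}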
	I0429 11:48:44.530620    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:48:44.554250    5624 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 11:48:44.568402    5624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 11:48:44.591286    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:48:44.591286    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:48:44.608026    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:48:44.609600    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:48:44.609600    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:48:44.636797    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:48:44.636797    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 11:48:44.637077    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 11:48:44.637119    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 11:48:44.637216    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 11:48:44.650825    5624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:48:44.720177    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 11:48:44.720177    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
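The download URLs above carry a "?checksum=file:...sha256" suffix, i.e. each binary is verified against a published SHA-256 digest before use. A self-contained Go sketch of that verification step (the paths and digest are placeholders, not minikube's cache layout):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifySHA256 streams a file through sha256 and compares the digest to
// an expected hex string, the check implied by the checksum-suffixed
// download URLs in the log above.
func verifySHA256(path, expectedHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expectedHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expectedHex)
	}
	return nil
}

func main() {
	// Usage (hypothetical): verify <file> <sha256-hex>
	if err := verifySHA256(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}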
	I0429 11:48:46.020766    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 11:48:46.045009    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 11:48:46.086515    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:48:46.122290    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 11:48:46.169395    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:48:46.177101    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
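The bash one-liner above is an idempotent /etc/hosts upsert: strip any existing control-plane.minikube.internal line, then append the current VIP. The same pattern in Go, as a sketch against a local test file (the real edit happens over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing hosts line for the given host, then
// appends the current IP, so repeated runs stay idempotent.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		if line != "" || len(kept) > 0 {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts.test" is a stand-in path for illustration.
	if err := upsertHost("hosts.test", "172.26.191.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("hosts.test updated")
}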
	I0429 11:48:46.217764    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:46.429770    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:48:46.464260    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:48:46.465051    5624 start.go:316] joinCluster: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:48:46.465215    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 11:48:46.465294    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:51.260864    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:48:51.260864    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:51.260864    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:48:51.475822    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0105685s)
	I0429 11:48:51.475977    5624 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:48:51.475977    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 506mov.idrjb78fiqa494du --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m03 --control-plane --apiserver-advertise-address=172.26.177.113 --apiserver-bind-port=8443"
	I0429 11:49:36.038236    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 506mov.idrjb78fiqa494du --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m03 --control-plane --apiserver-advertise-address=172.26.177.113 --apiserver-bind-port=8443": (44.5617936s)
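The join invocation above is the output of "kubeadm token create --print-join-command" with control-plane flags appended. A sketch of that composition in Go, with the flags taken from the log and placeholder token/hash values (illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoinCmd appends the flags shown in the log above to the
// join command printed by the primary node.
func controlPlaneJoinCmd(printed, nodeName, advertiseIP string) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
}

func main() {
	// Placeholder token and CA-cert hash; the real values come from
	// "kubeadm token create --print-join-command" on the primary.
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(controlPlaneJoinCmd(printed, "ha-437800-m03", "172.26.177.113"))
}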
	I0429 11:49:36.038361    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 11:49:36.869696    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800-m03 minikube.k8s.io/updated_at=2024_04_29T11_49_36_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=false
	I0429 11:49:37.053684    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-437800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
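After the join, the new node is labeled and its control-plane NoSchedule taint is removed via kubectl, as the two Run lines above show. A minimal sketch of those two calls through os/exec, assuming kubectl on PATH and a placeholder kubeconfig (the label set is reduced from the full one in the log):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl with a fixed kubeconfig; path is a placeholder.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--kubeconfig=/path/to/kubeconfig"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	if err := run("label", "--overwrite", "nodes", "ha-437800-m03", "minikube.k8s.io/primary=false"); err != nil {
		panic(err)
	}
	// The trailing "-" removes the taint rather than adding it.
	if err := run("taint", "nodes", "ha-437800-m03", "node-role.kubernetes.io/control-plane:NoSchedule-"); err != nil {
		panic(err)
	}
	fmt.Println("node labeled and untainted")
}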
	I0429 11:49:37.217226    5624 start.go:318] duration metric: took 50.751777s to joinCluster
	I0429 11:49:37.217226    5624 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:49:37.218398    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:49:37.222225    5624 out.go:177] * Verifying Kubernetes components...
	I0429 11:49:37.237511    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:49:37.601991    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:49:37.635628    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:49:37.636430    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 11:49:37.636567    5624 kubeadm.go:477] Overriding stale ClientConfig host https://172.26.191.254:8443 with https://172.26.176.3:8443
	I0429 11:49:37.637433    5624 node_ready.go:35] waiting up to 6m0s for node "ha-437800-m03" to be "Ready" ...
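The GET loop that follows polls the Node object roughly every 500ms until its Ready condition turns True. A compact client-go sketch of the same wait, assuming a standard kubeconfig (illustrative, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports
// Ready=True, mirroring the GET loop in the log below.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between GETs
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-437800-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}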
	I0429 11:49:37.637756    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:37.637756    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:37.637756    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:37.637756    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:37.651755    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:49:38.152578    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:38.152578    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:38.152578    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:38.152578    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:38.157168    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:38.642229    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:38.642229    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:38.642486    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:38.642486    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:38.649066    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:39.146439    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:39.146496    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:39.146496    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:39.146496    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:39.150379    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:39.652743    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:39.652743    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:39.652818    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:39.652818    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:39.657213    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:39.659818    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:40.143391    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:40.143391    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:40.143622    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:40.143622    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:40.148081    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:40.648885    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:40.648885    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:40.648885    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:40.648885    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:40.654846    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:41.138051    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:41.138164    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:41.138164    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:41.138164    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:41.143519    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:41.642216    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:41.642216    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:41.642216    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:41.642216    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:41.647897    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:42.145757    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:42.146023    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:42.146023    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:42.146023    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:42.150650    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:42.151813    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:42.651964    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:42.652095    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:42.652095    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:42.652095    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:42.660515    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:43.140284    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:43.140284    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:43.140284    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:43.140284    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:43.248924    5624 round_trippers.go:574] Response Status: 200 OK in 108 milliseconds
	I0429 11:49:43.642727    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:43.642727    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:43.642727    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:43.642727    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:43.647206    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:44.146188    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:44.146188    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:44.146188    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:44.146188    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:44.203686    5624 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0429 11:49:44.205727    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:44.650910    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:44.650910    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:44.650910    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:44.650910    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:44.662258    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:45.151704    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:45.151704    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:45.151924    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:45.151924    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:45.167548    5624 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 11:49:45.642135    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:45.642135    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:45.642135    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:45.642135    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:45.648179    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:46.144863    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:46.144959    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:46.144959    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:46.144959    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:46.149829    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:46.649758    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:46.649758    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:46.649758    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:46.649758    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:46.654765    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:46.655741    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:47.142009    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:47.142239    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:47.142239    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:47.142239    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:47.147501    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:47.646447    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:47.646447    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:47.646447    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:47.646447    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:47.652147    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:48.147412    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:48.147412    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:48.147503    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:48.147503    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:48.157026    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:48.651054    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:48.651166    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:48.651166    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:48.651166    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:48.655454    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:48.656966    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:49.150180    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:49.150358    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:49.150358    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:49.150358    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:49.155141    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:49.650360    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:49.650610    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:49.650610    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:49.650610    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:49.659599    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:50.152440    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.152440    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.152440    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.152440    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.158076    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.158891    5624 node_ready.go:49] node "ha-437800-m03" has status "Ready":"True"
	I0429 11:49:50.158960    5624 node_ready.go:38] duration metric: took 12.5213623s for node "ha-437800-m03" to be "Ready" ...
	I0429 11:49:50.158960    5624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:49:50.159027    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:50.159027    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.159027    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.159027    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.170847    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:50.179668    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.179668    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vvf4j
	I0429 11:49:50.179668    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.179668    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.179668    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.185670    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:50.186727    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.186727    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.186727    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.186727    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.191665    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.191665    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.191665    5624 pod_ready.go:81] duration metric: took 11.9965ms for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.191665    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.191665    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxvcx
	I0429 11:49:50.191665    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.191665    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.191665    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.195720    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.196665    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.196665    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.196665    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.196665    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.200649    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:50.201716    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.201716    5624 pod_ready.go:81] duration metric: took 10.051ms for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.201716    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.201716    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800
	I0429 11:49:50.201716    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.201716    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.201716    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.204676    5624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 11:49:50.205653    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.205653    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.205653    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.205653    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.209665    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.210658    5624 pod_ready.go:92] pod "etcd-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.210658    5624 pod_ready.go:81] duration metric: took 8.9415ms for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.210658    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.210658    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:49:50.210658    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.210658    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.210658    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.213669    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:50.215211    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:50.215211    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.215211    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.215211    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.219818    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.221083    5624 pod_ready.go:92] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.221083    5624 pod_ready.go:81] duration metric: took 10.4253ms for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.221083    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.357368    5624 request.go:629] Waited for 136.2836ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m03
	I0429 11:49:50.357902    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m03
	I0429 11:49:50.357976    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.357976    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.357976    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.364335    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:50.562194    5624 request.go:629] Waited for 196.3432ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.562194    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.562194    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.562194    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.562194    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.567100    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.568606    5624 pod_ready.go:92] pod "etcd-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.568606    5624 pod_ready.go:81] duration metric: took 347.5205ms for pod "etcd-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
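The "Waited for ... due to client-side throttling" lines in this stretch come from client-go's default client-side rate limiter (QPS 5, burst 10 when left unset), not from server-side priority and fairness. The limits are adjustable on the rest.Config before building the clientset; a short sketch with a placeholder kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the point is the QPS/Burst fields.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raising the client-side limits trades API-server load for fewer
	// of the throttling waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}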
	I0429 11:49:50.568606    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.765156    5624 request.go:629] Waited for 196.3034ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:49:50.765293    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:49:50.765293    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.765293    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.765293    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.770665    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.953784    5624 request.go:629] Waited for 180.9009ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.953883    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.953883    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.953883    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.953883    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.959392    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.960580    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.960580    5624 pod_ready.go:81] duration metric: took 391.9709ms for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.960580    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.157035    5624 request.go:629] Waited for 195.6473ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:49:51.157295    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:49:51.157295    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.157295    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.157295    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.174304    5624 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 11:49:51.361375    5624 request.go:629] Waited for 184.594ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:51.361642    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:51.361642    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.361642    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.361642    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.371599    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:51.372293    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:51.372293    5624 pod_ready.go:81] duration metric: took 411.7094ms for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.372293    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.562808    5624 request.go:629] Waited for 190.3945ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m03
	I0429 11:49:51.563020    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m03
	I0429 11:49:51.563020    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.563020    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.563020    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.571023    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:51.767691    5624 request.go:629] Waited for 195.7151ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:51.767691    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:51.767691    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.767691    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.767691    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.771377    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:51.772692    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:51.772905    5624 pod_ready.go:81] duration metric: took 400.3265ms for pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.772905    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.961458    5624 request.go:629] Waited for 188.2851ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:49:51.961548    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:49:51.961548    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.961548    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.961548    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.966447    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:52.166287    5624 request.go:629] Waited for 198.253ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:52.166560    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:52.166560    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.166560    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.166622    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.170954    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:52.172041    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.172041    5624 pod_ready.go:81] duration metric: took 399.0662ms for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.172041    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.353242    5624 request.go:629] Waited for 181.0336ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:49:52.353437    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:49:52.353502    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.353502    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.353502    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.359280    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:52.557760    5624 request.go:629] Waited for 196.655ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:52.557760    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:52.557958    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.557958    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.558009    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.562768    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:52.564210    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.564270    5624 pod_ready.go:81] duration metric: took 392.2261ms for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.564270    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.759774    5624 request.go:629] Waited for 195.259ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m03
	I0429 11:49:52.759774    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m03
	I0429 11:49:52.759976    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.759976    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.759976    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.767012    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:52.961408    5624 request.go:629] Waited for 193.4518ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:52.961585    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:52.961756    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.961818    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.961818    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.967463    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:52.968920    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.968920    5624 pod_ready.go:81] duration metric: took 404.6467ms for pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.968920    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tjfd" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.163020    5624 request.go:629] Waited for 193.8936ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tjfd
	I0429 11:49:53.163212    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tjfd
	I0429 11:49:53.163212    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.163212    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.163212    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.170074    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:53.366408    5624 request.go:629] Waited for 194.5578ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:53.366620    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:53.366620    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.366675    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.366675    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.375248    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:53.377274    5624 pod_ready.go:92] pod "kube-proxy-2tjfd" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:53.377361    5624 pod_ready.go:81] duration metric: took 408.4386ms for pod "kube-proxy-2tjfd" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.377361    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.553033    5624 request.go:629] Waited for 175.573ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:49:53.553255    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:49:53.553255    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.553255    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.553365    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.560459    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:53.755715    5624 request.go:629] Waited for 194.2778ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:53.756052    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:53.756052    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.756052    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.756052    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.761246    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:53.762903    5624 pod_ready.go:92] pod "kube-proxy-hvzz9" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:53.762988    5624 pod_ready.go:81] duration metric: took 385.6239ms for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.762988    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.958079    5624 request.go:629] Waited for 195.0082ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:49:53.958510    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:49:53.958610    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.958610    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.958610    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.964497    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:54.160852    5624 request.go:629] Waited for 195.234ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.160919    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.161034    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.161034    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.161034    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.165681    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.167413    5624 pod_ready.go:92] pod "kube-proxy-pzfjr" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.167413    5624 pod_ready.go:81] duration metric: took 404.4217ms for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.167413    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.364943    5624 request.go:629] Waited for 197.3068ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:49:54.365099    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:49:54.365099    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.365099    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.365099    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.369547    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.566485    5624 request.go:629] Waited for 195.4733ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:54.566695    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:54.566940    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.566940    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.566940    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.571353    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.572347    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.572347    5624 pod_ready.go:81] duration metric: took 404.9307ms for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.572347    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.753063    5624 request.go:629] Waited for 180.5639ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:49:54.753279    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:49:54.753279    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.753279    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.753279    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.762860    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:54.956613    5624 request.go:629] Waited for 192.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.956866    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.956866    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.956866    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.956866    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.961931    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.963366    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.963366    5624 pod_ready.go:81] duration metric: took 391.0157ms for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.963366    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:55.159572    5624 request.go:629] Waited for 195.7577ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m03
	I0429 11:49:55.159878    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m03
	I0429 11:49:55.160014    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.160014    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.160014    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.165654    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:55.362070    5624 request.go:629] Waited for 194.4042ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:55.362143    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:55.362143    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.362143    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.362143    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.369968    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:55.372630    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:55.372697    5624 pod_ready.go:81] duration metric: took 409.3285ms for pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:55.372777    5624 pod_ready.go:38] duration metric: took 5.2137759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:49:55.372777    5624 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:49:55.387731    5624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:49:55.417751    5624 api_server.go:72] duration metric: took 18.2003297s to wait for apiserver process to appear ...
	I0429 11:49:55.417751    5624 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:49:55.417751    5624 api_server.go:253] Checking apiserver healthz at https://172.26.176.3:8443/healthz ...
	I0429 11:49:55.426551    5624 api_server.go:279] https://172.26.176.3:8443/healthz returned 200:
	ok
	I0429 11:49:55.427092    5624 round_trippers.go:463] GET https://172.26.176.3:8443/version
	I0429 11:49:55.427092    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.427092    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.427092    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.429067    5624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 11:49:55.429067    5624 api_server.go:141] control plane version: v1.30.0
	I0429 11:49:55.429067    5624 api_server.go:131] duration metric: took 11.3165ms to wait for apiserver health ...
	I0429 11:49:55.429067    5624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:49:55.565553    5624 request.go:629] Waited for 136.2445ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.565752    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.565752    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.565752    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.565752    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.577472    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:55.590022    5624 system_pods.go:59] 24 kube-system pods found
	I0429 11:49:55.590022    5624 system_pods.go:61] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800-m03" [fba838a1-ccbb-4d11-8f65-54f6a134946e] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-7cn9p" [7eb5ba76-640d-4092-abb9-dd1b95d5f39d] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800-m03" [8e35959a-f76f-4f30-8536-7205acdf70a1] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m03" [370a7b65-2d41-4f57-8c9c-418e0ebc24cb] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-2tjfd" [ce4ffe20-47ae-438d-ad34-e2d2e06eda4f] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800-m03" [fde709a1-d79f-42fd-adf8-d2b60995c8f3] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "kube-vip-ha-437800-m03" [5b4aa283-605d-45db-aaa4-cf75723a2870] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:49:55.590747    5624 system_pods.go:74] duration metric: took 161.6786ms to wait for pod list to return data ...
	I0429 11:49:55.590747    5624 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:49:55.765505    5624 request.go:629] Waited for 174.7564ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:49:55.765792    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:49:55.765792    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.765792    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.765792    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.782371    5624 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 11:49:55.782975    5624 default_sa.go:45] found service account: "default"
	I0429 11:49:55.783036    5624 default_sa.go:55] duration metric: took 192.2262ms for default service account to be created ...
	I0429 11:49:55.783036    5624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:49:55.954809    5624 request.go:629] Waited for 171.4128ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.954895    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.954957    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.954957    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.954957    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.968748    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:49:55.980658    5624 system_pods.go:86] 24 kube-system pods found
	I0429 11:49:55.980721    5624 system_pods.go:89] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800-m03" [fba838a1-ccbb-4d11-8f65-54f6a134946e] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-7cn9p" [7eb5ba76-640d-4092-abb9-dd1b95d5f39d] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800-m03" [8e35959a-f76f-4f30-8536-7205acdf70a1] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m03" [370a7b65-2d41-4f57-8c9c-418e0ebc24cb] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-2tjfd" [ce4ffe20-47ae-438d-ad34-e2d2e06eda4f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800-m03" [fde709a1-d79f-42fd-adf8-d2b60995c8f3] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800-m03" [5b4aa283-605d-45db-aaa4-cf75723a2870] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:49:55.980721    5624 system_pods.go:126] duration metric: took 197.6827ms to wait for k8s-apps to be running ...
	I0429 11:49:55.980721    5624 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:49:55.995953    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:49:56.025882    5624 system_svc.go:56] duration metric: took 45.1616ms WaitForService to wait for kubelet
	I0429 11:49:56.026001    5624 kubeadm.go:576] duration metric: took 18.8086275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:49:56.026001    5624 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:49:56.158833    5624 request.go:629] Waited for 132.7647ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes
	I0429 11:49:56.158960    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes
	I0429 11:49:56.158960    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:56.159043    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:56.159043    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:56.165344    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:56.167126    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167260    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167260    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167260    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167340    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167340    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167340    5624 node_conditions.go:105] duration metric: took 141.3382ms to run NodePressure ...
	I0429 11:49:56.167470    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:49:56.167470    5624 start.go:254] writing updated cluster config ...
	I0429 11:49:56.182701    5624 ssh_runner.go:195] Run: rm -f paused
	I0429 11:49:56.343181    5624 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 11:49:56.346724    5624 out.go:177] * Done! kubectl is now configured to use "ha-437800" cluster and "default" namespace by default
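The readiness loop above is minikube's pod_ready.go polling each pod's PodReady condition through the API server; the recurring "Waited for ...ms due to client-side throttling" lines come from client-go's default rate limiter (not API priority and fairness) spacing those GETs out. As a rough standalone sketch only, assuming k8s.io/client-go and reusing a pod name from the log (this is illustrative, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the test above leaves the "ha-437800" context selected.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same 6m0s per-pod budget the log uses.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-pzfjr", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// pod_ready.go logs `has status "Ready":"True"` when this condition holds.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		// Fixed poll interval for the sketch; minikube instead relies on
		// client-go's rate limiter, which produced the ~200ms waits in the log.
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}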
	
	
	==> Docker <==
	Apr 29 11:42:11 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ccb6f28cb9dd6e06354aec4a3126e1b35a4300e6fd0c1940adfa8d1d3c37371d/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 11:42:11 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fca08318dd69fc8786da1b473f674e6397d9b8f040f141a011288e2a92fd077a/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 11:42:11 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:42:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75579f7022d4cb184c0482a3d809c4402bf9e835e52a30133ca4ee45bc5dcb2f/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.441862079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.442250380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.443010582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.444028285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550276587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550473987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550496088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550643088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.600847231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601231032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601432232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601803833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.059624103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060225905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060252605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060501005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/683f67e5fac4a33e11059922b81272badb370df8d76464f94848a3495a78bf04/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 11:50:36 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:50:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.912861006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913033008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913056208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913192409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
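The resolv.conf rewrites logged by cri-dockerd track each pod's DNS policy: the infra pods get the Hyper-V host switch address (172.26.176.1) as their upstream, while the busybox pod uses cluster DNS. Reconstructed from the 11:50:35 log line above, the busybox container's /etc/resolv.conf would read roughly:

  nameserver 10.96.0.10
  search default.svc.cluster.local svc.cluster.local cluster.local
  options ndots:5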
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d097abf5af66       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   683f67e5fac4a       busybox-fc5497c4f-kxn7k
	5a273ec673a42       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   75579f7022d4c       storage-provisioner
	7e21b812f1ccd       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   fca08318dd69f       coredns-7db6d8ff4d-vvf4j
	376e44d9bafd3       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   ccb6f28cb9dd6       coredns-7db6d8ff4d-zxvcx
	22e486515eda5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   701497cc8b03d       kindnet-qgbzf
	c6c05f014af2c       a0bf559e280cf                                                                                         9 minutes ago        Running             kube-proxy                0                   dd04e5743865e       kube-proxy-hvzz9
	d059ac8fe4753       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     10 minutes ago       Running             kube-vip                  0                   8db90fd8b8711       kube-vip-ha-437800
	2ff176e30ec62       259c8277fcbbc                                                                                         10 minutes ago       Running             kube-scheduler            0                   052d202dd54e8       kube-scheduler-ha-437800
	ad03ce97e2dbf       c42f13656d0b2                                                                                         10 minutes ago       Running             kube-apiserver            0                   d79e4ee79205f       kube-apiserver-ha-437800
	752b474aaa312       c7aad43836fa5                                                                                         10 minutes ago       Running             kube-controller-manager   0                   6a224fb51b215       kube-controller-manager-ha-437800
	0084f71d1910b       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   8f19761775907       etcd-ha-437800
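This table matches the layout of crictl ps -a output, which minikube logs gathers from the node. To reproduce it against the same profile, something like the following should work (illustrative command, not part of the test run):

  minikube -p ha-437800 ssh -- sudo crictl ps -a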
	
	
	==> coredns [376e44d9bafd] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52120 - 62103 "HINFO IN 8895575928499902026.9047732300977096024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.147201501s
	[INFO] 10.244.1.2:60060 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.198192674s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.06663823s
	[INFO] 10.244.0.4:43561 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000108s
	[INFO] 10.244.2.2:51887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224502s
	[INFO] 10.244.2.2:36346 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000105901s
	[INFO] 10.244.1.2:59078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033344663s
	[INFO] 10.244.1.2:53712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000228402s
	[INFO] 10.244.1.2:52382 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102401s
	[INFO] 10.244.0.4:54042 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014075011s
	[INFO] 10.244.0.4:33766 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088201s
	[INFO] 10.244.0.4:46993 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155201s
	[INFO] 10.244.2.2:38110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126601s
	[INFO] 10.244.2.2:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000642s
	[INFO] 10.244.2.2:43378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156401s
	[INFO] 10.244.1.2:56619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107701s
	[INFO] 10.244.1.2:42654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114501s
	[INFO] 10.244.0.4:50355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197901s
	[INFO] 10.244.0.4:56046 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000912s
	[INFO] 10.244.0.4:58870 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147302s
	[INFO] 10.244.2.2:48053 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154202s
	[INFO] 10.244.2.2:59663 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000332402s
	[INFO] 10.244.2.2:43598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281102s
	[INFO] 10.244.2.2:38833 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188801s
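The NXDOMAIN/NOERROR pairs above are the ndots:5 search-path walk: the absolute query "kubernetes.default." is forwarded upstream and fails, while the same name with the svc.cluster.local suffix appended resolves in-cluster. A lookup like this can be replayed from a throwaway pod (illustrative command, reusing the report's busybox image):

  kubectl --context ha-437800 run -it --rm dnsprobe --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local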
	
	
	==> coredns [7e21b812f1cc] <==
	[INFO] 10.244.0.4:42206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194002s
	[INFO] 10.244.0.4:54465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206602s
	[INFO] 10.244.0.4:59891 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107101s
	[INFO] 10.244.0.4:34920 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103601s
	[INFO] 10.244.0.4:42536 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137102s
	[INFO] 10.244.2.2:39927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091101s
	[INFO] 10.244.2.2:52442 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011267589s
	[INFO] 10.244.2.2:53077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186702s
	[INFO] 10.244.2.2:58533 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112701s
	[INFO] 10.244.2.2:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226402s
	[INFO] 10.244.1.2:42446 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102201s
	[INFO] 10.244.1.2:50823 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063501s
	[INFO] 10.244.0.4:48975 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000355803s
	[INFO] 10.244.2.2:47577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112901s
	[INFO] 10.244.2.2:45113 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230802s
	[INFO] 10.244.1.2:50322 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124301s
	[INFO] 10.244.1.2:55709 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116601s
	[INFO] 10.244.1.2:49760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159002s
	[INFO] 10.244.1.2:46786 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097401s
	[INFO] 10.244.0.4:33276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166501s
	[INFO] 10.244.0.4:37027 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000285402s
	[INFO] 10.244.0.4:46102 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000330103s
	[INFO] 10.244.0.4:39295 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000736s
	[INFO] 10.244.2.2:46024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.2.2:36536 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141201s
	
	
	==> describe nodes <==
	Name:               ha-437800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T11_41_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:50:45 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:50:45 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:50:45 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:50:45 +0000   Mon, 29 Apr 2024 11:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.176.3
	  Hostname:    ha-437800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 83ce417c0fca49beb91fd5a5e984cb94
	  System UUID:                ec8c47e6-30d4-a345-98f2-580804f5da63
	  Boot ID:                    1b00c75c-57fc-4c53-9736-a168a0852459
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kxn7k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-vvf4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 coredns-7db6d8ff4d-zxvcx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 etcd-ha-437800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m59s
	  kube-system                 kindnet-qgbzf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-ha-437800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-controller-manager-ha-437800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-proxy-hvzz9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-ha-437800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-vip-ha-437800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m40s  kube-proxy       
	  Normal  Starting                 9m57s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m57s  kubelet          Node ha-437800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s  kubelet          Node ha-437800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s  kubelet          Node ha-437800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m44s  node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	  Normal  NodeReady                9m31s  kubelet          Node ha-437800 status is now: NodeReady
	  Normal  RegisteredNode           5m41s  node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	  Normal  RegisteredNode           109s   node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	
	
	Name:               ha-437800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T11_45_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:45:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:50:44 +0000   Mon, 29 Apr 2024 11:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:50:44 +0000   Mon, 29 Apr 2024 11:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:50:44 +0000   Mon, 29 Apr 2024 11:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:50:44 +0000   Mon, 29 Apr 2024 11:45:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.185.80
	  Hostname:    ha-437800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c50764115df64038acb4443b3cae77d2
	  System UUID:                f0ff1baa-9620-b949-8541-c672e1b2a37d
	  Boot ID:                    22ec1ffd-e71a-47e6-b7d4-9f4db7535179
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dsnxf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-437800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-qg7qh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-437800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-437800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-pzfjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-437800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-437800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x2 over 6m3s)  kubelet          Node ha-437800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x2 over 6m3s)  kubelet          Node ha-437800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x2 over 6m3s)  kubelet          Node ha-437800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m59s                node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	  Normal  NodeReady                5m50s                kubelet          Node ha-437800-m02 status is now: NodeReady
	  Normal  RegisteredNode           5m41s                node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	  Normal  RegisteredNode           109s                 node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	
	
	Name:               ha-437800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T11_49_36_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:49:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:51:00 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:51:00 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:51:00 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:51:00 +0000   Mon, 29 Apr 2024 11:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.177.113
	  Hostname:    ha-437800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b7bc4253463458e8279559d8bce36c3
	  System UUID:                78128ab4-98e9-ca40-b816-190967054531
	  Boot ID:                    fa1b1b92-c139-49e3-addb-77f8b4a64c8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ndzvx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-437800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m11s
	  kube-system                 kindnet-7cn9p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m12s
	  kube-system                 kube-apiserver-ha-437800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-ha-437800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-2tjfd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-ha-437800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-vip-ha-437800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-437800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-437800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-437800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
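For reference, the percentages in the Allocated resources blocks are computed against each node's allocatable capacity (2 CPUs = 2000m, 2164264Ki memory). On ha-437800: 950m / 2000m = 47.5%, shown truncated as 47%; and 290Mi = 296960Ki, so 296960Ki / 2164264Ki ≈ 13.7%, shown as 13%.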
	
	
	==> dmesg <==
	[  +1.317566] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.085462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 11:40] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.185652] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Apr29 11:41] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.110755] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.599914] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.238212] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.236895] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +2.836426] systemd-fstab-generator[1175]: Ignoring "noauto" option for root device
	[  +0.247498] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.219802] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.311576] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +11.766611] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.129929] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.873927] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +7.075979] systemd-fstab-generator[1715]: Ignoring "noauto" option for root device
	[  +0.112089] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.917122] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.428342] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[ +15.695772] kauditd_printk_skb: 17 callbacks suppressed
	[Apr29 11:42] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 11:45] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [0084f71d1910] <==
	{"level":"info","ts":"2024-04-29T11:49:33.61868Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"717e02486ecd6145","to":"e2da8f2047ddc811","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T11:49:33.61874Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"717e02486ecd6145","remote-peer-id":"e2da8f2047ddc811"}
	{"level":"info","ts":"2024-04-29T11:49:34.146314Z","caller":"traceutil/trace.go:171","msg":"trace[1255459359] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"130.195161ms","start":"2024-04-29T11:49:34.016099Z","end":"2024-04-29T11:49:34.146295Z","steps":["trace[1255459359] 'process raft request'  (duration: 130.011961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:49:34.377964Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e2da8f2047ddc811","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-29T11:49:35.885165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"717e02486ecd6145 switched to configuration voters=(2905485954495457534 8177976483471253829 16346535166302078993)"}
	{"level":"info","ts":"2024-04-29T11:49:35.885676Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"7c8d00848e12c1fd","local-member-id":"717e02486ecd6145"}
	{"level":"info","ts":"2024-04-29T11:49:35.885806Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"717e02486ecd6145","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e2da8f2047ddc811"}
	{"level":"warn","ts":"2024-04-29T11:49:43.249298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.899106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-437800-m03\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-04-29T11:49:43.250534Z","caller":"traceutil/trace.go:171","msg":"trace[1105710276] range","detail":"{range_begin:/registry/minions/ha-437800-m03; range_end:; response_count:1; response_revision:1536; }","duration":"104.207409ms","start":"2024-04-29T11:49:43.146312Z","end":"2024-04-29T11:49:43.250519Z","steps":["trace[1105710276] 'range keys from in-memory index tree'  (duration: 101.271303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:49:44.04585Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e2da8f2047ddc811","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"66.585634ms"}
	{"level":"warn","ts":"2024-04-29T11:49:44.045913Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"28525c14e996a8fe","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"66.653634ms"}
	{"level":"info","ts":"2024-04-29T11:49:44.047309Z","caller":"traceutil/trace.go:171","msg":"trace[1423423427] linearizableReadLoop","detail":"{readStateIndex:1721; appliedIndex:1721; }","duration":"183.908669ms","start":"2024-04-29T11:49:43.863301Z","end":"2024-04-29T11:49:44.04721Z","steps":["trace[1423423427] 'read index received'  (duration: 183.904769ms)","trace[1423423427] 'applied index is now lower than readState.Index'  (duration: 2.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:49:44.200231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.017975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-04-29T11:49:44.200312Z","caller":"traceutil/trace.go:171","msg":"trace[1294263200] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1540; }","duration":"337.146076ms","start":"2024-04-29T11:49:43.86315Z","end":"2024-04-29T11:49:44.200296Z","steps":["trace[1294263200] 'agreement among raft nodes before linearized reading'  (duration: 184.244469ms)","trace[1294263200] 'range keys from in-memory index tree'  (duration: 152.682106ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:49:44.20105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:49:43.863134Z","time spent":"337.899477ms","remote":"127.0.0.1:48500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1132,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-04-29T11:49:44.201917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.886713ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12177327851611898940 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ggsrba5m443rpmrelcz5wnnh5i\" mod_revision:1496 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ggsrba5m443rpmrelcz5wnnh5i\" value_size:605 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T11:49:44.201985Z","caller":"traceutil/trace.go:171","msg":"trace[265018269] linearizableReadLoop","detail":"{readStateIndex:1722; appliedIndex:1721; }","duration":"154.414409ms","start":"2024-04-29T11:49:44.047562Z","end":"2024-04-29T11:49:44.201977Z","steps":["trace[265018269] 'read index received'  (duration: 1.492803ms)","trace[265018269] 'applied index is now lower than readState.Index'  (duration: 152.920706ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:49:44.203376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.759267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T11:49:44.203628Z","caller":"traceutil/trace.go:171","msg":"trace[1269792660] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1541; }","duration":"233.078167ms","start":"2024-04-29T11:49:43.970538Z","end":"2024-04-29T11:49:44.203616Z","steps":["trace[1269792660] 'agreement among raft nodes before linearized reading'  (duration: 232.713467ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:49:44.356745Z","caller":"traceutil/trace.go:171","msg":"trace[1824228431] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"143.859088ms","start":"2024-04-29T11:49:44.212844Z","end":"2024-04-29T11:49:44.356704Z","steps":["trace[1824228431] 'process raft request'  (duration: 143.746588ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:49:44.368057Z","caller":"traceutil/trace.go:171","msg":"trace[438639503] transaction","detail":"{read_only:false; response_revision:1543; number_of_response:1; }","duration":"150.650802ms","start":"2024-04-29T11:49:44.217395Z","end":"2024-04-29T11:49:44.368045Z","steps":["trace[438639503] 'process raft request'  (duration: 150.561202ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:49:44.368441Z","caller":"traceutil/trace.go:171","msg":"trace[311563232] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"109.878021ms","start":"2024-04-29T11:49:44.258551Z","end":"2024-04-29T11:49:44.368429Z","steps":["trace[311563232] 'process raft request'  (duration: 109.45912ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:51:36.717239Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1028}
	{"level":"info","ts":"2024-04-29T11:51:36.8212Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1028,"took":"102.956658ms","hash":1207502939,"current-db-size-bytes":3538944,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2080768,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-29T11:51:36.821647Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1207502939,"revision":1028,"compact-revision":-1}
	
	
	==> kernel <==
	 11:51:40 up 12 min,  0 users,  load average: 0.99, 0.63, 0.35
	Linux ha-437800 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [22e486515eda] <==
	I0429 11:50:57.118624       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 11:51:07.128305       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 11:51:07.128870       1 main.go:227] handling current node
	I0429 11:51:07.128902       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 11:51:07.128912       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 11:51:07.129038       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 11:51:07.129079       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 11:51:17.143728       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 11:51:17.143883       1 main.go:227] handling current node
	I0429 11:51:17.144110       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 11:51:17.144223       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 11:51:17.144689       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 11:51:17.144783       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 11:51:27.156582       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 11:51:27.156910       1 main.go:227] handling current node
	I0429 11:51:27.157061       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 11:51:27.157248       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 11:51:27.157577       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 11:51:27.157817       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 11:51:37.174465       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 11:51:37.174564       1 main.go:227] handling current node
	I0429 11:51:37.174585       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 11:51:37.174594       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 11:51:37.175423       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 11:51:37.175967       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ad03ce97e2db] <==
	Trace[650621099]:  ---"Txn call completed" 532ms (11:49:24.534)]
	Trace[650621099]: [533.831071ms] [533.831071ms] END
	E0429 11:49:29.351529       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0429 11:49:29.351579       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0429 11:49:29.352866       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.7µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0429 11:49:29.354175       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0429 11:49:29.354860       1 timeout.go:142] post-timeout activity - time-elapsed: 3.356406ms, PATCH "/api/v1/namespaces/default/events/ha-437800-m03.17cabdddcb4d3f64" result: <nil>
	I0429 11:49:29.596846       1 trace.go:236] Trace[600921503]: "Get" accept:application/json, */*,audit-id:88f5e6b3-4e81-4892-9205-5bef050a64c8,client:172.26.177.113,api-group:,api-version:v1,name:ha-437800-m03,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-437800-m03,user-agent:kubeadm/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:GET (29-Apr-2024 11:49:28.999) (total time: 596ms):
	Trace[600921503]: ---"About to write a response" 596ms (11:49:29.596)
	Trace[600921503]: [596.897597ms] [596.897597ms] END
	E0429 11:50:40.920216       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57757: use of closed network connection
	E0429 11:50:41.529506       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57759: use of closed network connection
	E0429 11:50:42.224208       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57761: use of closed network connection
	E0429 11:50:42.848463       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57763: use of closed network connection
	E0429 11:50:43.454829       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57765: use of closed network connection
	E0429 11:50:44.037932       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57767: use of closed network connection
	E0429 11:50:44.600595       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57769: use of closed network connection
	E0429 11:50:45.151200       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57771: use of closed network connection
	E0429 11:50:45.723500       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57773: use of closed network connection
	E0429 11:50:46.760453       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57776: use of closed network connection
	E0429 11:50:57.324444       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57778: use of closed network connection
	E0429 11:50:57.889332       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57781: use of closed network connection
	E0429 11:51:08.467521       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57783: use of closed network connection
	E0429 11:51:09.026089       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57786: use of closed network connection
	E0429 11:51:19.595622       1 conn.go:339] Error on socket receive: read tcp 172.26.191.254:8443->172.26.176.1:57788: use of closed network connection
	
	
	==> kube-controller-manager [752b474aaa31] <==
	I0429 11:42:12.690010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54µs"
	I0429 11:42:12.724584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.641651ms"
	I0429 11:42:12.727165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.3µs"
	I0429 11:45:37.701420       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-437800-m02\" does not exist"
	I0429 11:45:37.752225       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-437800-m02" podCIDRs=["10.244.1.0/24"]
	I0429 11:45:41.598914       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-437800-m02"
	I0429 11:49:28.548056       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-437800-m03\" does not exist"
	I0429 11:49:28.615733       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-437800-m03" podCIDRs=["10.244.2.0/24"]
	I0429 11:49:31.697835       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-437800-m03"
	I0429 11:50:34.073588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.745442ms"
	I0429 11:50:34.129710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.938212ms"
	I0429 11:50:34.226197       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.408993ms"
	I0429 11:50:34.420194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="193.864688ms"
	I0429 11:50:34.775451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="355.212012ms"
	I0429 11:50:34.830992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.325711ms"
	I0429 11:50:34.831715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="452.601µs"
	I0429 11:50:34.832057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.701µs"
	I0429 11:50:34.948760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.009964ms"
	I0429 11:50:34.952585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.5µs"
	I0429 11:50:37.186226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.972875ms"
	I0429 11:50:37.188871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.1µs"
	I0429 11:50:37.423949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.911934ms"
	I0429 11:50:37.424625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="507.404µs"
	I0429 11:50:37.747262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.078386ms"
	I0429 11:50:37.747806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="328.502µs"
	
	
	==> kube-proxy [c6c05f014af2] <==
	I0429 11:41:59.396774       1 server_linux.go:69] "Using iptables proxy"
	I0429 11:41:59.434801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.176.3"]
	I0429 11:41:59.493135       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 11:41:59.493254       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 11:41:59.493279       1 server_linux.go:165] "Using iptables Proxier"
	I0429 11:41:59.500453       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 11:41:59.501578       1 server.go:872] "Version info" version="v1.30.0"
	I0429 11:41:59.501731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 11:41:59.505744       1 config.go:192] "Starting service config controller"
	I0429 11:41:59.506465       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 11:41:59.506814       1 config.go:101] "Starting endpoint slice config controller"
	I0429 11:41:59.506976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 11:41:59.511510       1 config.go:319] "Starting node config controller"
	I0429 11:41:59.511761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 11:41:59.607430       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 11:41:59.607438       1 shared_informer.go:320] Caches are synced for service config
	I0429 11:41:59.612839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ff176e30ec6] <==
	W0429 11:41:40.397139       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 11:41:40.397300       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 11:41:40.462765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 11:41:40.463052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 11:41:40.464834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 11:41:40.464885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 11:41:40.506727       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:41:40.506891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 11:41:40.528623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 11:41:40.528780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 11:41:40.555707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 11:41:40.555966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 11:41:40.653587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 11:41:40.653940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 11:41:40.758264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 11:41:40.758437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 11:41:40.804050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 11:41:40.804681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 11:41:43.293317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 11:50:34.076914       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dsnxf\": pod busybox-fc5497c4f-dsnxf is already assigned to node \"ha-437800-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dsnxf" node="ha-437800-m02"
	E0429 11:50:34.078546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dsnxf\": pod busybox-fc5497c4f-dsnxf is already assigned to node \"ha-437800-m02\"" pod="default/busybox-fc5497c4f-dsnxf"
	E0429 11:50:34.079618       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kxn7k\": pod busybox-fc5497c4f-kxn7k is already assigned to node \"ha-437800\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kxn7k" node="ha-437800"
	E0429 11:50:34.079836       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7deeaa5b-a8bf-4ba8-b7d4-48507f9a1df0(default/busybox-fc5497c4f-kxn7k) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kxn7k"
	E0429 11:50:34.079871       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kxn7k\": pod busybox-fc5497c4f-kxn7k is already assigned to node \"ha-437800\"" pod="default/busybox-fc5497c4f-kxn7k"
	I0429 11:50:34.079901       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kxn7k" node="ha-437800"
	
	
	==> kubelet <==
	Apr 29 11:46:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 11:47:43 ha-437800 kubelet[2218]: E0429 11:47:43.407586    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 11:47:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 11:47:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 11:47:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 11:47:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 11:48:43 ha-437800 kubelet[2218]: E0429 11:48:43.405710    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 11:48:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 11:48:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 11:48:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 11:48:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 11:49:43 ha-437800 kubelet[2218]: E0429 11:49:43.408132    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 11:49:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 11:49:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 11:49:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 11:49:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 11:50:34 ha-437800 kubelet[2218]: I0429 11:50:34.051663    2218 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zxvcx" podStartSLOduration=518.051631886 podStartE2EDuration="8m38.051631886s" podCreationTimestamp="2024-04-29 11:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 11:42:12.697707192 +0000 UTC m=+29.528909404" watchObservedRunningTime="2024-04-29 11:50:34.051631886 +0000 UTC m=+530.882834098"
	Apr 29 11:50:34 ha-437800 kubelet[2218]: I0429 11:50:34.052440    2218 topology_manager.go:215] "Topology Admit Handler" podUID="7deeaa5b-a8bf-4ba8-b7d4-48507f9a1df0" podNamespace="default" podName="busybox-fc5497c4f-kxn7k"
	Apr 29 11:50:34 ha-437800 kubelet[2218]: I0429 11:50:34.249033    2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvn29\" (UniqueName: \"kubernetes.io/projected/7deeaa5b-a8bf-4ba8-b7d4-48507f9a1df0-kube-api-access-tvn29\") pod \"busybox-fc5497c4f-kxn7k\" (UID: \"7deeaa5b-a8bf-4ba8-b7d4-48507f9a1df0\") " pod="default/busybox-fc5497c4f-kxn7k"
	Apr 29 11:50:35 ha-437800 kubelet[2218]: I0429 11:50:35.327942    2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683f67e5fac4a33e11059922b81272badb370df8d76464f94848a3495a78bf04"
	Apr 29 11:50:43 ha-437800 kubelet[2218]: E0429 11:50:43.416201    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 11:50:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 11:50:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 11:50:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 11:50:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:51:32.089294    4832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
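The hex directory name in the stderr warning above is not random: the Docker CLI stores context metadata under ~/.docker/contexts/meta/<sha256 of the context name>/meta.json, and the warning appears because no metadata was ever written for the implicit "default" context. A minimal Go sketch (not part of the test suite) reproducing the hash:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// The Docker CLI keys context metadata directories by sha256(context name).
	sum := sha256.Sum256([]byte("default"))
	fmt.Printf("%x\n", sum) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
}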
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-437800 -n ha-437800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-437800 -n ha-437800: (12.4473049s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-437800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.58s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (39.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list --output json: exit status 1 (3.6682538s)

** stderr ** 
	W0429 12:08:32.777201    6520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
ha_test.go:392: failed to list profiles with json format. args "out/minikube-windows-amd64.exe profile list --output json": exit status 1
ha_test.go:398: failed to decode json from profile list: args "out/minikube-windows-amd64.exe profile list --output json": unexpected end of JSON input
ha_test.go:411: expected the json of 'profile list' to include "ha-437800" but got *""*. args: "out/minikube-windows-amd64.exe profile list --output json"
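The "unexpected end of JSON input" at ha_test.go:398 is the direct consequence of the non-zero exit above: stdout is empty, and decoding zero bytes of JSON fails with exactly that error. A minimal Go sketch (hypothetical, not the test's actual decoding code) reproducing it:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// `profile list --output json` exited non-zero, so stdout was empty;
	// unmarshalling an empty byte slice yields the error seen in the log.
	var profiles map[string]interface{}
	err := json.Unmarshal([]byte(""), &profiles)
	fmt.Println(err) // unexpected end of JSON input
}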
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-437800 -n ha-437800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-437800 -n ha-437800: (12.3853419s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 logs -n 25: (8.8258802s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:02 UTC | 29 Apr 24 12:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:02 UTC | 29 Apr 24 12:03 UTC |
	|         | ha-437800-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:03 UTC | 29 Apr 24 12:03 UTC |
	|         | ha-437800:/home/docker/cp-test_ha-437800-m03_ha-437800.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:03 UTC | 29 Apr 24 12:03 UTC |
	|         | ha-437800-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800 sudo cat                                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:03 UTC | 29 Apr 24 12:03 UTC |
	|         | /home/docker/cp-test_ha-437800-m03_ha-437800.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:03 UTC | 29 Apr 24 12:03 UTC |
	|         | ha-437800-m02:/home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:03 UTC | 29 Apr 24 12:04 UTC |
	|         | ha-437800-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800-m02 sudo cat                                                                                  | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:04 UTC |
	|         | /home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:04 UTC |
	|         | ha-437800-m04:/home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:04 UTC |
	|         | ha-437800-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800-m04 sudo cat                                                                                  | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:04 UTC |
	|         | /home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-437800 cp testdata\cp-test.txt                                                                                        | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:04 UTC |
	|         | ha-437800-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:04 UTC | 29 Apr 24 12:05 UTC |
	|         | ha-437800-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:05 UTC | 29 Apr 24 12:05 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:05 UTC | 29 Apr 24 12:05 UTC |
	|         | ha-437800-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:05 UTC | 29 Apr 24 12:05 UTC |
	|         | ha-437800:/home/docker/cp-test_ha-437800-m04_ha-437800.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:05 UTC | 29 Apr 24 12:05 UTC |
	|         | ha-437800-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800 sudo cat                                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:05 UTC | 29 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_ha-437800-m04_ha-437800.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:06 UTC | 29 Apr 24 12:06 UTC |
	|         | ha-437800-m02:/home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:06 UTC | 29 Apr 24 12:06 UTC |
	|         | ha-437800-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800-m02 sudo cat                                                                                  | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:06 UTC | 29 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt                                                                      | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:06 UTC | 29 Apr 24 12:06 UTC |
	|         | ha-437800-m03:/home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n                                                                                                         | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:06 UTC | 29 Apr 24 12:07 UTC |
	|         | ha-437800-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-437800 ssh -n ha-437800-m03 sudo cat                                                                                  | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:07 UTC | 29 Apr 24 12:07 UTC |
	|         | /home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-437800 node stop m02 -v=7                                                                                             | ha-437800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:07 UTC | 29 Apr 24 12:07 UTC |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:38:36
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:38:36.492320    5624 out.go:291] Setting OutFile to fd 1208 ...
	I0429 11:38:36.492320    5624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:38:36.492320    5624 out.go:304] Setting ErrFile to fd 988...
	I0429 11:38:36.492320    5624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:38:36.515311    5624 out.go:298] Setting JSON to false
	I0429 11:38:36.518304    5624 start.go:129] hostinfo: {"hostname":"minikube6","uptime":32189,"bootTime":1714358527,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:38:36.518304    5624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:38:36.525131    5624 out.go:177] * [ha-437800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:38:36.528092    5624 notify.go:220] Checking for updates...
	I0429 11:38:36.530761    5624 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:38:36.533288    5624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:38:36.535997    5624 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:38:36.538664    5624 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:38:36.540913    5624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:38:36.543678    5624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:38:41.922743    5624 out.go:177] * Using the hyperv driver based on user configuration
	I0429 11:38:41.926389    5624 start.go:297] selected driver: hyperv
	I0429 11:38:41.926389    5624 start.go:901] validating driver "hyperv" against <nil>
	I0429 11:38:41.926389    5624 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:38:41.977395    5624 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:38:41.978641    5624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:38:41.978815    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:38:41.978815    5624 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 11:38:41.978815    5624 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:38:41.978815    5624 start.go:340] cluster config:
	{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:38:41.979347    5624 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:38:41.986830    5624 out.go:177] * Starting "ha-437800" primary control-plane node in "ha-437800" cluster
	I0429 11:38:41.988718    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:38:41.989238    5624 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 11:38:41.989441    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:38:41.989585    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:38:41.989585    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:38:41.990189    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:38:41.990189    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json: {Name:mkde8b2acced2302a59bd62b727de17f46014934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:38:41.991691    5624 start.go:360] acquireMachinesLock for ha-437800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:38:41.991691    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800"
	I0429 11:38:41.991691    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:38:41.992220    5624 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 11:38:41.994443    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:38:41.994443    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:38:41.994443    5624 client.go:168] LocalClient.Create starting
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:38:41.995010    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:44.101410    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:38:45.875269    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:38:45.876150    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:45.876150    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:38:47.363130    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:38:47.363130    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:47.363730    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:38:50.887488    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:38:50.887488    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:50.890162    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:38:51.418530    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:38:51.592762    5624 main.go:141] libmachine: Creating VM...
	I0429 11:38:51.592762    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:38:54.398774    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:38:54.398907    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:54.398907    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:38:54.399115    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:38:56.200408    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:38:56.201159    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:56.201159    5624 main.go:141] libmachine: Creating VHD
	I0429 11:38:56.201159    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:38:59.838589    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6FFE2E55-97CA-42A8-86D7-9C44E847BFA0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:38:59.838717    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:38:59.838717    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:38:59.838717    5624 main.go:141] libmachine: Writing SSH key tar header
	I0429 11:38:59.848739    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:39:02.955462    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:02.956253    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:02.956253    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd' -SizeBytes 20000MB
	I0429 11:39:05.455031    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:05.455031    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:05.455848    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:39:09.166874    5624 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-437800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 11:39:09.166935    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:09.166935    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800 -DynamicMemoryEnabled $false
	I0429 11:39:11.396816    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:11.396816    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:11.397172    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800 -Count 2
	I0429 11:39:13.561606    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:13.561606    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:13.561840    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\boot2docker.iso'
	I0429 11:39:16.069448    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:16.069701    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:16.069793    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\disk.vhd'
	I0429 11:39:18.659398    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:18.659398    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:18.659398    5624 main.go:141] libmachine: Starting VM...
	I0429 11:39:18.659801    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800
	I0429 11:39:21.704077    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:21.704545    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:21.704545    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:39:21.704545    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:23.838160    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:23.839123    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:23.839188    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:26.244651    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:26.244651    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:27.244955    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:29.366727    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:31.867953    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:31.867953    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:32.877964    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:34.972321    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:34.972321    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:34.972849    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:37.425869    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:37.425869    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:38.433128    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:40.586000    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:40.586595    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:40.586595    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:43.083143    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:39:43.083306    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:44.095030    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:46.280115    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:46.280589    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:46.280806    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:48.848982    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:50.915924    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:50.915987    5624 main.go:141] libmachine: [stderr =====>] : 
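
	The repeated state/IP queries above are the driver's wait-for-boot loop: poll the VM state, then ask its first network adapter for an address, and retry until one appears (172.26.176.3 in this run, after roughly five rounds). A minimal Go sketch of that loop follows; the helper names are illustrative, not minikube's actual internals.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOut invokes PowerShell the way every "[executing ==>]" line above does:
// powershell.exe -NoProfile -NonInteractive <command>, returning trimmed stdout.
func psOut(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP retries the two queries from the log until the first network
// adapter reports an address.
func waitForIP(vm string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		state, err := psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err == nil && ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second) // pause before the next round, as the driver does
	}
	return "", fmt.Errorf("no IP reported for %s after %d attempts", vm, attempts)
}

func main() {
	ip, err := waitForIP("ha-437800", 60)
	fmt.Println(ip, err)
}
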
	I0429 11:39:50.915987    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:39:50.915987    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:53.034378    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:53.035177    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:53.035177    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:39:55.518359    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:39:55.518359    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:55.525145    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:39:55.535483    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:39:55.535483    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:39:55.674222    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 11:39:55.674353    5624 buildroot.go:166] provisioning hostname "ha-437800"
	I0429 11:39:55.674353    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:39:57.743353    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:39:57.743353    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:39:57.744178    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:00.242402    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:00.242402    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:00.249132    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:00.249807    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:00.249807    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800 && echo "ha-437800" | sudo tee /etc/hostname
	I0429 11:40:00.402652    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800
	
	I0429 11:40:00.402652    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:02.457677    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:02.457677    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:02.457778    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:05.037289    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:05.037289    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:05.043755    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:05.044480    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:05.044480    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:40:05.199203    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:40:05.199203    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:40:05.199323    5624 buildroot.go:174] setting up certificates
	I0429 11:40:05.199323    5624 provision.go:84] configureAuth start
	I0429 11:40:05.199449    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:07.276116    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:07.276116    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:07.277038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:09.834337    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:09.835274    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:09.835414    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:11.887732    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:11.887732    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:11.888727    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:14.415902    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:14.415902    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:14.416510    5624 provision.go:143] copyHostCerts
	I0429 11:40:14.416679    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:40:14.417351    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:40:14.417433    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:40:14.417558    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:40:14.419225    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:40:14.419438    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:40:14.419549    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:40:14.419687    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:40:14.420888    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:40:14.421372    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:40:14.421488    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:40:14.421878    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:40:14.422828    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800 san=[127.0.0.1 172.26.176.3 ha-437800 localhost minikube]
	I0429 11:40:14.754918    5624 provision.go:177] copyRemoteCerts
	I0429 11:40:14.770646    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:40:14.770646    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:16.835461    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:16.835678    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:16.835678    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:19.356157    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:19.356221    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:19.356221    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:40:19.466257    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6955733s)
	I0429 11:40:19.466257    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:40:19.466749    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:40:19.518151    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:40:19.518450    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 11:40:19.566660    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:40:19.566959    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:40:19.612368    5624 provision.go:87] duration metric: took 14.4129311s to configureAuth
	I0429 11:40:19.612368    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:40:19.612996    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:40:19.613076    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:21.640535    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:21.641488    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:21.641652    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:24.137961    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:24.138825    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:24.145291    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:24.145556    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:24.145556    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:40:24.283831    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:40:24.283831    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:40:24.284096    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:40:24.284096    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:26.320672    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:26.321411    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:26.321411    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:28.814670    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:28.814670    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:28.821837    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:28.821975    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:28.821975    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:40:28.986150    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:40:28.986269    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:31.033604    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:31.033604    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:31.033663    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:33.490232    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:33.491149    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:33.497204    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:33.497888    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:33.497888    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:40:35.657947    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 11:40:35.658013    5624 machine.go:97] duration metric: took 44.7416719s to provisionDockerMachine
	I0429 11:40:35.658013    5624 client.go:171] duration metric: took 1m53.6626719s to LocalClient.Create
	I0429 11:40:35.658149    5624 start.go:167] duration metric: took 1m53.6627335s to libmachine.API.Create "ha-437800"
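
	The SSH one-liner that closed out provisioning above is an idempotent unit update: diff the freshly rendered docker.service.new against the installed unit, and only when they differ swap the file in and daemon-reload/enable/restart. On this first boot no unit exists yet, which is why diff prints "can't stat" and the replace branch runs, producing the "Created symlink" line. A small Go sketch that assembles the same command (hypothetical helper, not minikube's actual API):

package main

import "fmt"

// buildUnitSwapCmd reproduces the compare-and-swap one-liner from the log:
// replace and restart only when the rendered unit differs from the installed
// one (diff also exits non-zero when the installed unit is missing).
func buildUnitSwapCmd(unitPath, service string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unitPath, service)
}

func main() {
	// Prints the exact command shown at 11:40:33 in the log.
	fmt.Println(buildUnitSwapCmd("/lib/systemd/system/docker.service", "docker"))
}
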
	I0429 11:40:35.658197    5624 start.go:293] postStartSetup for "ha-437800" (driver="hyperv")
	I0429 11:40:35.658220    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:40:35.673328    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:40:35.673553    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:37.742844    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:37.743860    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:37.743947    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:40.265000    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:40.265870    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:40.266066    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:40:40.369670    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6959646s)
	I0429 11:40:40.384152    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:40:40.392855    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:40:40.392979    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:40:40.393675    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:40:40.395409    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:40:40.395409    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:40:40.412804    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:40:40.432921    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:40:40.486575    5624 start.go:296] duration metric: took 4.8283172s for postStartSetup
	I0429 11:40:40.489565    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:42.586663    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:42.587676    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:42.587901    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:45.124860    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:45.124860    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:45.124934    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:40:45.127211    5624 start.go:128] duration metric: took 2m3.1340179s to createHost
	I0429 11:40:45.127747    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:47.193834    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:47.194268    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:47.194268    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:49.689959    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:49.690734    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:49.697660    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:49.698440    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:49.698440    5624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:40:49.835370    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714390849.839550148
	
	I0429 11:40:49.835370    5624 fix.go:216] guest clock: 1714390849.839550148
	I0429 11:40:49.835370    5624 fix.go:229] Guest: 2024-04-29 11:40:49.839550148 +0000 UTC Remote: 2024-04-29 11:40:45.1277475 +0000 UTC m=+128.818450601 (delta=4.711802648s)
	I0429 11:40:49.835370    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:51.954065    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:51.954065    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:51.954420    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:54.417699    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:54.418398    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:54.423972    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:40:54.424723    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.176.3 22 <nil> <nil>}
	I0429 11:40:54.424723    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714390849
	I0429 11:40:54.574529    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:40:49 UTC 2024
	
	I0429 11:40:54.574529    5624 fix.go:236] clock set: Mon Apr 29 11:40:49 UTC 2024
	 (err=<nil>)
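
	The clock fix above compares the guest clock read over SSH (date +%s.%N) against the host-side timestamp recorded when createHost returned; the ~4.7s delta exceeds the driver's tolerance, so it resets the guest with sudo date -s. A Go sketch of the delta computation using the exact values from this run (the 2s tolerance below is an assumption, not taken from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "date +%s.%N" output shown above,
// e.g. "1714390849.839550148", into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		ns, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, _ := parseGuestClock("1714390849.839550148")                 // guest clock from the log
	host := time.Date(2024, 4, 29, 11, 40, 45, 127747500, time.UTC)    // "Remote" timestamp from the log
	delta := guest.Sub(host)
	fmt.Printf("delta=%v\n", delta) // ~4.711802648s, as logged
	if delta > 2*time.Second || delta < -2*time.Second { // assumed tolerance
		// the log's next step: reset the guest clock over SSH
		fmt.Printf("sudo date -s @%d\n", guest.Unix())
	}
}
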
	I0429 11:40:54.574529    5624 start.go:83] releasing machines lock for "ha-437800", held for 2m12.5817912s
	I0429 11:40:54.575064    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:56.652046    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:40:56.652579    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:56.652579    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:40:59.190683    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:40:59.191428    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:40:59.196972    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:40:59.196972    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:40:59.208518    5624 ssh_runner.go:195] Run: cat /version.json
	I0429 11:40:59.208698    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:01.369746    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:01.369746    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:01.369846    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:01.369917    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:41:04.032433    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:41:04.033016    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:04.034103    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:41:04.052594    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:41:04.052594    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:04.052594    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:41:04.128009    5624 ssh_runner.go:235] Completed: cat /version.json: (4.919372s)
	I0429 11:41:04.142024    5624 ssh_runner.go:195] Run: systemctl --version
	I0429 11:41:04.213727    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0167165s)
	I0429 11:41:04.226439    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 11:41:04.238496    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:41:04.252323    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:41:04.286115    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:41:04.286115    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:41:04.286115    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:41:04.336994    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:41:04.373518    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:41:04.392150    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:41:04.404506    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:41:04.438687    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:41:04.475781    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:41:04.512036    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:41:04.543440    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:41:04.582376    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:41:04.615588    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:41:04.648793    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:41:04.681904    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:41:04.715181    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:41:04.747255    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:04.962615    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:41:04.994778    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:41:05.008746    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:41:05.052500    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:41:05.091521    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:41:05.144830    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:41:05.181982    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:41:05.219071    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:41:05.281381    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:41:05.303749    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:41:05.355512    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:41:05.374724    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:41:05.393089    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:41:05.448059    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:41:05.676687    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:41:05.887364    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:41:05.887634    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:41:05.938625    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:06.158705    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:41:08.681433    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.522708s)
	I0429 11:41:08.696709    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:41:08.734729    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:41:08.773784    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:41:09.013987    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:41:09.232810    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:09.457623    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:41:09.502220    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:41:09.539328    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:09.775032    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 11:41:09.890046    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:41:09.904827    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 11:41:09.914221    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:41:09.928454    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:41:09.947490    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:41:10.001368    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:41:10.012377    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:41:10.054952    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:41:10.090454    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:41:10.090454    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:41:10.094513    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:41:10.097500    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:41:10.097500    5624 ip.go:210] interface addr: 172.26.176.1/20
	I0429 11:41:10.109499    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:41:10.117354    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:41:10.153511    5624 kubeadm.go:877] updating cluster {Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:41:10.154079    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:41:10.163447    5624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 11:41:10.186795    5624 docker.go:685] Got preloaded images: 
	I0429 11:41:10.186795    5624 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 11:41:10.198623    5624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 11:41:10.239584    5624 ssh_runner.go:195] Run: which lz4
	I0429 11:41:10.246301    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 11:41:10.260390    5624 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 11:41:10.266895    5624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 11:41:10.267020    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 11:41:12.280808    5624 docker.go:649] duration metric: took 2.0342758s to copy over tarball
	I0429 11:41:12.293601    5624 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 11:41:21.182274    5624 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8886036s)
	I0429 11:41:21.182348    5624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 11:41:21.254351    5624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 11:41:21.274179    5624 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 11:41:21.330833    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:21.550712    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:41:24.943343    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3440535s)
	I0429 11:41:24.953411    5624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 11:41:24.978211    5624 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 11:41:24.978211    5624 cache_images.go:84] Images are preloaded, skipping loading
	I0429 11:41:24.978211    5624 kubeadm.go:928] updating node { 172.26.176.3 8443 v1.30.0 docker true true} ...
	I0429 11:41:24.978211    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.176.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:41:24.987539    5624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 11:41:25.022297    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:41:25.022297    5624 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 11:41:25.022297    5624 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:41:25.022450    5624 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.176.3 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-437800 NodeName:ha-437800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.176.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.176.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:41:25.022518    5624 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.176.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-437800"
	  kubeletExtraArgs:
	    node-ip: 172.26.176.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.176.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
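The dump above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, which is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A stdlib-only Go sketch that splits such a stream and lists each document's kind (a rough string scan, not a real YAML parser):

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds extracts the "kind:" value from each YAML document in a stream.
    func kinds(stream string) []string {
        var out []string
        for _, doc := range strings.Split(stream, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
                }
            }
        }
        return out
    }

    func main() {
        stream := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
        fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
    }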
	I0429 11:41:25.022717    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:41:25.035746    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:41:25.064321    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:41:25.064321    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
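The lease timings in the manifest (vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1) satisfy the ordering client-go's leader election roughly expects: leaseDuration > renewDeadline > retryPeriod (with a jitter factor). A tiny Go check of that invariant, values copied from the manifest above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the kube-vip manifest above (seconds).
        lease := 5 * time.Second
        renew := 3 * time.Second
        retry := 1 * time.Second

        // client-go leader election requires, roughly:
        //   leaseDuration > renewDeadline > retryPeriod * jitter
        const jitter = 1.2
        ok := lease > renew && float64(renew) > float64(retry)*jitter
        fmt.Println("timings valid:", ok) // timings valid: true
    }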
	I0429 11:41:25.078782    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:41:25.096459    5624 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:41:25.108782    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 11:41:25.128531    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 11:41:25.159904    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:41:25.191951    5624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 11:41:25.224116    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0429 11:41:25.269964    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:41:25.276712    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
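The /bin/bash one-liner above is an idempotent /etc/hosts update: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the current VIP, and the result is copied back over /etc/hosts via a temp file. The same filter-and-append expressed in Go (rewriteHosts is an illustrative name; the real file I/O and sudo copy are elided):

    package main

    import (
        "fmt"
        "strings"
    )

    // rewriteHosts drops any existing entry for host and appends ip<TAB>host.
    func rewriteHosts(hosts, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n172.26.0.9\tcontrol-plane.minikube.internal\n"
        fmt.Print(rewriteHosts(in, "172.26.191.254", "control-plane.minikube.internal"))
    }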
	I0429 11:41:25.314177    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:41:25.541266    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:41:25.573048    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.176.3
	I0429 11:41:25.573048    5624 certs.go:194] generating shared ca certs ...
	I0429 11:41:25.573048    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.573048    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:41:25.574034    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:41:25.574034    5624 certs.go:256] generating profile certs ...
	I0429 11:41:25.575143    5624 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:41:25.575263    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt with IP's: []
	I0429 11:41:25.933264    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt ...
	I0429 11:41:25.933264    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.crt: {Name:mke3f60849b28a4fba6b85cd3f79b6cb8b4dd390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.934741    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key ...
	I0429 11:41:25.934741    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key: {Name:mk16731689887025c819e8844cbaf6132d0c6269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:25.935261    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4
	I0429 11:41:25.936337    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.191.254]
	I0429 11:41:26.150290    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 ...
	I0429 11:41:26.150290    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4: {Name:mk0bd09318c9f647250117ce8a1458a877442397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.151481    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4 ...
	I0429 11:41:26.151481    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4: {Name:mk8f0755d767ce5ab827f02650006a37ddc122fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.152659    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.daf43dc4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:41:26.167218    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.daf43dc4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
	I0429 11:41:26.168561    5624 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
	I0429 11:41:26.169279    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt with IP's: []
	I0429 11:41:26.418072    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt ...
	I0429 11:41:26.418072    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt: {Name:mk96bc7760b5d88b39ffdf07f71258ba50cc8f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.420002    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key ...
	I0429 11:41:26.420002    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key: {Name:mka8851bea0e8e606285ced0ac7e8dc119877f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:26.420002    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:41:26.421317    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:41:26.421480    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:41:26.421687    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:41:26.421850    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:41:26.422147    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:41:26.422304    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:41:26.429525    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:41:26.430703    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:41:26.431724    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:41:26.431724    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:41:26.433256    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:41:26.433992    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:41:26.434212    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:41:26.434339    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:41:26.434339    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:26.435819    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:41:26.483144    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:41:26.528404    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:41:26.574278    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:41:26.625982    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 11:41:26.677103    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:41:26.730505    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:41:26.784244    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:41:26.837778    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:41:26.887539    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:41:26.935317    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:41:26.994853    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 11:41:27.041600    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:41:27.063935    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:41:27.099134    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.108785    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.122468    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:41:27.145143    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:41:27.184495    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:41:27.218475    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.225873    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.238956    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:41:27.260960    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 11:41:27.297915    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:41:27.334610    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.342064    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.357282    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:41:27.379168    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
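Each openssl x509 -hash / ln -fs pair above follows the OpenSSL c_rehash convention: a CA directory is indexed by symlinks named <subject-hash>.0 so TLS libraries can locate the right CA by hash. A Go sketch of creating one such link, shelling out to openssl exactly as the remote commands do (assumes openssl on PATH and write access to the certs directory; a sketch, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates certsDir/<subject-hash>.0 -> cert, mirroring the
    // openssl + ln -fs commands in the log. Requires openssl on PATH.
    func linkByHash(cert, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(cert, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }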
	I0429 11:41:27.413443    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:41:27.422107    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:41:27.422107    5624 kubeadm.go:391] StartCluster: {Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:41:27.432503    5624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 11:41:27.476980    5624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:41:27.511165    5624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:41:27.544518    5624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:41:27.564322    5624 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:41:27.564322    5624 kubeadm.go:156] found existing configuration files:
	
	I0429 11:41:27.580083    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:41:27.598316    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:41:27.612051    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:41:27.644522    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:41:27.663384    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:41:27.674001    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:41:27.707847    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:41:27.724859    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:41:27.737842    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:41:27.773371    5624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:41:27.791132    5624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:41:27.804487    5624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 11:41:27.824585    5624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 11:41:28.316373    5624 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:41:43.824833    5624 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:41:43.825060    5624 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:41:43.825330    5624 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0429 11:41:43.825861    5624 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:41:43.828285    5624 out.go:204]   - Generating certificates and keys ...
	I0429 11:41:43.828485    5624 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:41:43.828597    5624 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:41:43.828696    5624 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:41:43.828782    5624 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:41:43.829324    5624 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-437800 localhost] and IPs [172.26.176.3 127.0.0.1 ::1]
	I0429 11:41:43.829569    5624 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-437800 localhost] and IPs [172.26.176.3 127.0.0.1 ::1]
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:41:43.829795    5624 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:41:43.830468    5624 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:41:43.831056    5624 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:41:43.831346    5624 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:41:43.831346    5624 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:41:43.831346    5624 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:41:43.834901    5624 out.go:204]   - Booting up control plane ...
	I0429 11:41:43.834901    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:41:43.835477    5624 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:41:43.836112    5624 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:41:43.836112    5624 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:41:43.836112    5624 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:41:43.836690    5624 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:41:43.836847    5624 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00263247s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [api-check] The API server is healthy after 8.766804148s
	I0429 11:41:43.836890    5624 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:41:43.837638    5624 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:41:43.837801    5624 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:41:43.838062    5624 kubeadm.go:309] [mark-control-plane] Marking the node ha-437800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:41:43.838362    5624 kubeadm.go:309] [bootstrap-token] Using token: h7cu04.z6k8bpxubty5dxx7
	I0429 11:41:43.841130    5624 out.go:204]   - Configuring RBAC rules ...
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:41:43.842095    5624 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:41:43.843264    5624 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:41:43.843432    5624 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:41:43.843432    5624 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:41:43.843796    5624 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:41:43.843863    5624 kubeadm.go:309] 
	I0429 11:41:43.843908    5624 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:41:43.843908    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:41:43.844106    5624 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:41:43.844106    5624 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844106    5624 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:41:43.844106    5624 kubeadm.go:309] 
	I0429 11:41:43.844943    5624 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:41:43.845052    5624 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:41:43.845052    5624 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:41:43.845052    5624 kubeadm.go:309] 
	I0429 11:41:43.845052    5624 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:41:43.845625    5624 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:41:43.845754    5624 kubeadm.go:309] 
	I0429 11:41:43.845883    5624 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token h7cu04.z6k8bpxubty5dxx7 \
	I0429 11:41:43.846165    5624 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 11:41:43.846165    5624 kubeadm.go:309] 	--control-plane 
	I0429 11:41:43.846165    5624 kubeadm.go:309] 
	I0429 11:41:43.846472    5624 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:41:43.846507    5624 kubeadm.go:309] 
	I0429 11:41:43.846624    5624 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token h7cu04.z6k8bpxubty5dxx7 \
	I0429 11:41:43.846624    5624 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
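The --discovery-token-ca-cert-hash that kubeadm prints is the SHA-256 of the cluster CA's public key in DER (SubjectPublicKeyInfo) form. A self-contained Go sketch that recomputes it from ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash computes the sha256:<hex> discovery hash: the SHA-256 of
    // the CA public key marshaled as DER SubjectPublicKeyInfo.
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(h)
    }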
	I0429 11:41:43.846624    5624 cni.go:84] Creating CNI manager for ""
	I0429 11:41:43.846624    5624 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 11:41:43.850556    5624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 11:41:43.871289    5624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 11:41:43.879887    5624 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 11:41:43.879887    5624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 11:41:43.931050    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 11:41:44.666335    5624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:41:44.681327    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:44.681327    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800 minikube.k8s.io/updated_at=2024_04_29T11_41_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=true
	I0429 11:41:44.691497    5624 ops.go:34] apiserver oom_adj: -16
	I0429 11:41:44.909700    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:45.423884    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:45.923645    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:46.426621    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:46.915461    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:47.421871    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:47.924043    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:48.417440    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:48.917835    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:49.420539    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:49.911588    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:50.425193    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:50.924922    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:51.411931    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:51.911074    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:52.416294    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:52.913764    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:53.416285    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:53.923201    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:54.421473    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:54.921538    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:55.422506    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:55.911217    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:56.417742    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:41:56.583171    5624 kubeadm.go:1107] duration metric: took 11.9167432s to wait for elevateKubeSystemPrivileges
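The burst of kubectl get sa default runs above is a roughly 500ms poll until kube-controller-manager has created the default service account, which the minikube-rbac binding depends on. A generic poll-until-success sketch in Go (check stands in for the kubectl invocation):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pollUntil runs check every interval until it succeeds or ctx expires.
    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        tries := 0
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New(`serviceaccount "default" not found`)
            }
            return nil
        })
        fmt.Println(err, "after", tries, "tries") // <nil> after 4 tries
    }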
	W0429 11:41:56.583263    5624 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:41:56.583263    5624 kubeadm.go:393] duration metric: took 29.1609291s to StartCluster
	I0429 11:41:56.583263    5624 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:56.583263    5624 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:41:56.584636    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:41:56.586259    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:41:56.586259    5624 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:41:56.586259    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:41:56.586259    5624 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 11:41:56.586259    5624 addons.go:69] Setting storage-provisioner=true in profile "ha-437800"
	I0429 11:41:56.586790    5624 addons.go:234] Setting addon storage-provisioner=true in "ha-437800"
	I0429 11:41:56.586844    5624 addons.go:69] Setting default-storageclass=true in profile "ha-437800"
	I0429 11:41:56.587008    5624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-437800"
	I0429 11:41:56.587034    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:41:56.587034    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:41:56.587326    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:56.588097    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:56.744040    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:41:57.066974    5624 start.go:946] {"host.minikube.internal": 172.26.176.1} host record injected into CoreDNS's ConfigMap
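The sed pipeline above rewrites the Corefile held in the coredns ConfigMap: it inserts a hosts block resolving host.minikube.internal to the gateway IP ahead of the forward . /etc/resolv.conf directive (and adds log after errors), then kubectl replaces the ConfigMap. The hosts insertion as plain string surgery in Go (a sketch of the edit, not the actual sed):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHosts inserts a hosts block before the forward directive.
    func injectHosts(corefile, hostIP string) string {
        block := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
        i := strings.Index(corefile, "    forward .")
        if i < 0 {
            return corefile
        }
        return corefile[:i] + block + corefile[i:]
    }

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
        fmt.Print(injectHosts(corefile, "172.26.176.1"))
    }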
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:58.809286    5624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:41:58.806452    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:41:58.809286    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:41:58.810293    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:41:58.812294    5624 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:41:58.812294    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:41:58.813286    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:41:58.813286    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 11:41:58.814285    5624 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 11:41:58.815310    5624 addons.go:234] Setting addon default-storageclass=true in "ha-437800"
	I0429 11:41:58.815310    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:41:58.816291    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:42:01.070623    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:01.070773    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:01.070872    5624 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:42:01.070872    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:42:01.070872    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:42:01.175648    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:01.175874    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:01.176272    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:03.268670    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:03.897628    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:42:03.897628    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:03.898166    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:42:04.074465    5624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:42:05.868718    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:42:05.868718    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:05.870276    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:42:06.015850    5624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:42:06.199503    5624 round_trippers.go:463] GET https://172.26.191.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 11:42:06.199503    5624 round_trippers.go:469] Request Headers:
	I0429 11:42:06.199503    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:42:06.199503    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:42:06.224158    5624 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0429 11:42:06.225193    5624 round_trippers.go:463] PUT https://172.26.191.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 11:42:06.225193    5624 round_trippers.go:469] Request Headers:
	I0429 11:42:06.225193    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:42:06.225193    5624 round_trippers.go:473]     Content-Type: application/json
	I0429 11:42:06.225193    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:42:06.232172    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:42:06.236167    5624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 11:42:06.240156    5624 addons.go:505] duration metric: took 9.6538218s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 11:42:06.240156    5624 start.go:245] waiting for cluster config update ...
	I0429 11:42:06.240156    5624 start.go:254] writing updated cluster config ...
	I0429 11:42:06.245157    5624 out.go:177] 
	I0429 11:42:06.254493    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:42:06.254629    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:42:06.260511    5624 out.go:177] * Starting "ha-437800-m02" control-plane node in "ha-437800" cluster
	I0429 11:42:06.264045    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:42:06.264045    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:42:06.264670    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:42:06.264670    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:42:06.265067    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:42:06.268036    5624 start.go:360] acquireMachinesLock for ha-437800-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:42:06.268036    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800-m02"
	I0429 11:42:06.268036    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:42:06.269042    5624 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 11:42:06.272037    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:42:06.272037    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:42:06.272037    5624 client.go:168] LocalClient.Create starting
	I0429 11:42:06.272037    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:42:06.273044    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:42:08.217300    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:42:08.217300    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:08.218296    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:42:09.980838    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:42:09.980838    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:09.981497    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:42:11.510951    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:42:11.511620    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:11.511620    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:42:15.110841    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:42:15.110841    5624 main.go:141] libmachine: [stderr =====>] : 
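The switch query keeps only switches that are External or that match the well-known Default Switch GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444; on this host only the internal Default Switch (SwitchType 1) exists, and it is what gets used below. A sketch of parsing that JSON and choosing a switch, assuming External switches are preferred when present (the types and preference rule are illustrative, not minikube's exact code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
    	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
    	for _, s := range switches {
    		if s.SwitchType == 2 || s.Id == defaultSwitchID {
    			fmt.Printf("using switch %q\n", s.Name)
    			return
    		}
    	}
    	fmt.Println("no usable virtual switch found")
    }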
	I0429 11:42:15.114008    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:42:15.629405    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:42:15.805205    5624 main.go:141] libmachine: Creating VM...
	I0429 11:42:15.806211    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:42:18.674560    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:42:18.675156    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:18.675524    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:42:18.675805    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:42:20.488894    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:42:20.489932    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:20.490021    5624 main.go:141] libmachine: Creating VHD
	I0429 11:42:20.490021    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:42:24.145856    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 90CD0A4C-0EA6-4A1A-B2E9-1522C726FEB7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:42:24.145856    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:24.145856    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:42:24.145856    5624 main.go:141] libmachine: Writing SSH key tar header
	I0429 11:42:24.157295    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:27.312988    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd' -SizeBytes 20000MB
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [stderr =====>] : 
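The disk dance above is deliberate: New-VHD -Fixed produces a raw image (data area starting at offset 0, plus a 512-byte footer), so the "magic tar header" and SSH key can be written straight into the file where the guest will find them on first boot; Convert-VHD then turns it into a sparse dynamic disk and Resize-VHD grows it to the requested 20000MB. A sketch of the seeding step, assuming the classic boot2docker/docker-machine convention of a tar stream that opens with a format-me marker entry (the key path is illustrative):

    package main

    import (
    	"archive/tar"
    	"bytes"
    	"os"
    )

    func main() {
    	pubKey, err := os.ReadFile("id_rsa.pub") // illustrative key path
    	if err != nil {
    		panic(err)
    	}
    	var buf bytes.Buffer
    	tw := tar.NewWriter(&buf)
    	// Marker entry: the guest's automount script partitions and formats
    	// the disk, then extracts the rest of the tar, when it sees this
    	// name (assumption: mirrors the boot2docker convention).
    	const magic = "boot2docker, please format-me"
    	tw.WriteHeader(&tar.Header{Name: magic, Mode: 0644, Size: int64(len(magic))})
    	tw.Write([]byte(magic))
    	tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700})
    	tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))})
    	tw.Write(pubKey)
    	tw.Close()

    	// Write the tar at offset 0 of the raw fixed VHD created above.
    	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	if _, err := f.WriteAt(buf.Bytes(), 0); err != nil {
    		panic(err)
    	}
    }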
	I0429 11:42:29.835462    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:42:33.587931    5624 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-437800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 11:42:33.588180    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:33.588243    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800-m02 -DynamicMemoryEnabled $false
	I0429 11:42:35.863281    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:35.863281    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:35.864143    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800-m02 -Count 2
	I0429 11:42:38.029235    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:38.029235    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:38.030354    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\boot2docker.iso'
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:40.576166    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\disk.vhd'
	I0429 11:42:43.237627    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:43.237721    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:43.237721    5624 main.go:141] libmachine: Starting VM...
	I0429 11:42:43.237721    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800-m02
	I0429 11:42:46.320403    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:46.320721    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:46.320721    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:42:46.320721    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:48.616367    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:48.616367    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:48.616667    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:51.148176    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:51.148176    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:52.156103    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:54.330276    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:54.330438    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:54.330438    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:42:56.840715    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:42:56.840715    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:57.842094    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:42:59.987946    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:02.450214    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:43:02.450214    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:03.454148    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:05.609211    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:05.609655    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:05.609655    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:08.106934    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:43:08.107973    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:09.109237    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:11.297183    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:13.913530    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:16.061065    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:16.061065    5624 main.go:141] libmachine: [stderr =====>] : 
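"Waiting for host to start..." is a poll loop: each round is two fresh PowerShell invocations, one for the VM state and one for the first address on the first NIC, with roughly a second of sleep between rounds; it took five rounds (about 27s) before DHCP handed out 172.26.185.80. A self-contained sketch of that loop:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // psOut runs one PowerShell snippet and returns trimmed stdout.
    func psOut(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls VM state and the first NIC's first address until
    // DHCP assigns one, mirroring the loop in the log above.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
    		if err != nil {
    			return "", err
    		}
    		if state == "Running" {
    			ip, _ := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    			if ip != "" {
    				return ip, nil // lease acquired
    			}
    		}
    		time.Sleep(time.Second) // matches the ~1s gap between rounds
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("ha-437800-m02", 6*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("guest IP:", ip)
    }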
	I0429 11:43:16.061065    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:43:16.061188    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:18.253299    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:18.253299    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:18.254215    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:20.771624    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:20.771624    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:20.778900    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:20.779400    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:20.779400    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:43:20.921867    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 11:43:20.921999    5624 buildroot.go:166] provisioning hostname "ha-437800-m02"
	I0429 11:43:20.921999    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:23.029510    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:23.029510    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:23.030184    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:25.562979    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:25.562979    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:25.570463    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:25.570643    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:25.570643    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800-m02 && echo "ha-437800-m02" | sudo tee /etc/hostname
	I0429 11:43:25.742794    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800-m02
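"Using SSH client type: native" means the provisioner speaks SSH from inside the Go process (the &{...} dump above is its connection config) rather than shelling out to ssh.exe. The first command returned the ISO's default hostname "minikube", so the node is renamed here. A sketch of the same call with golang.org/x/crypto/ssh, using the address and key path from this log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "172.26.185.80:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname ha-437800-m02 && echo "ha-437800-m02" | sudo tee /etc/hostname`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }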
	
	I0429 11:43:25.742794    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:27.866951    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:27.866951    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:27.867247    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:30.389636    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:30.390213    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:30.394911    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:30.395588    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:30.395588    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:43:30.546695    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:43:30.546695    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:43:30.546695    5624 buildroot.go:174] setting up certificates
	I0429 11:43:30.546695    5624 provision.go:84] configureAuth start
	I0429 11:43:30.546695    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:32.614621    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:32.615259    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:32.615329    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:35.143643    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:35.143643    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:35.143902    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:37.253926    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:37.254038    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:37.254038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:39.800603    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:39.800603    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:39.800603    5624 provision.go:143] copyHostCerts
	I0429 11:43:39.800859    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:43:39.801095    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:43:39.801095    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:43:39.801095    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:43:39.802602    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:43:39.802836    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:43:39.802836    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:43:39.802836    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:43:39.804258    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:43:39.804561    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:43:39.804645    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:43:39.805079    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:43:39.806214    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800-m02 san=[127.0.0.1 172.26.185.80 ha-437800-m02 localhost minikube]
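Each node gets its own server certificate because the SAN list must include that node's IP; here it is [127.0.0.1 172.26.185.80 ha-437800-m02 localhost minikube], signed by the shared ca.pem. A compact sketch of issuing a cert with those SANs via crypto/x509 (self-signed here for brevity; minikube signs with the CA key instead):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-437800-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list from the log line above:
    		DNSNames:    []string{"ha-437800-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.185.80")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }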
	I0429 11:43:40.135861    5624 provision.go:177] copyRemoteCerts
	I0429 11:43:40.149763    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:43:40.150299    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:42.273457    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:44.825619    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:44.826026    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:44.826420    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:43:44.939885    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7900845s)
	I0429 11:43:44.939885    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:43:44.939885    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:43:44.997527    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:43:44.997970    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:43:45.045774    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:43:45.045774    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 11:43:45.094941    5624 provision.go:87] duration metric: took 14.5481323s to configureAuth
	I0429 11:43:45.094997    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:43:45.095168    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:43:45.095168    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:47.163278    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:49.701385    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:49.701683    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:49.707476    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:49.708202    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:49.708202    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:43:49.844576    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:43:49.844576    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:43:49.844576    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:43:49.844576    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:51.951752    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:51.951789    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:51.951910    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:54.462935    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:54.463754    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:54.469918    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:54.469918    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:54.470510    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.176.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:43:54.641714    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.176.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
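The unit shipped above is rendered from a template with the node's NO_PROXY value and dockerd flags substituted in; the empty ExecStart= assignment clears any inherited start command so systemd accepts the single real one, as the unit's own comments explain. A minimal sketch of that rendering with text/template (field names are illustrative, not minikube's exact ones):

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"{{end}}
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraArgs}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unit))
    	t.Execute(os.Stdout, struct{ NoProxy, ExtraArgs string }{
    		NoProxy:   "172.26.176.3", // first control plane's IP, as in the log
    		ExtraArgs: "--label provider=hyperv --insecure-registry 10.96.0.0/12",
    	})
    }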
	
	I0429 11:43:54.641714    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:43:56.735635    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:43:56.736177    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:56.736234    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:43:59.257410    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:43:59.257410    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:43:59.263271    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:43:59.263271    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:43:59.263800    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:44:01.507236    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 11:44:01.509493    5624 machine.go:97] duration metric: took 45.448074s to provisionDockerMachine
	I0429 11:44:01.509586    5624 client.go:171] duration metric: took 1m55.2365581s to LocalClient.Create
	I0429 11:44:01.509586    5624 start.go:167] duration metric: took 1m55.2366505s to libmachine.API.Create "ha-437800"
	I0429 11:44:01.509586    5624 start.go:293] postStartSetup for "ha-437800-m02" (driver="hyperv")
	I0429 11:44:01.509715    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:44:01.524277    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:44:01.524277    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:03.639196    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:03.639196    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:03.639396    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:06.176738    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:06.177785    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:06.178325    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:06.294152    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7698374s)
	I0429 11:44:06.307153    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:44:06.315238    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:44:06.315352    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:44:06.315688    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:44:06.316797    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:44:06.316797    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:44:06.329529    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:44:06.350362    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:44:06.399519    5624 start.go:296] duration metric: took 4.8898947s for postStartSetup
	I0429 11:44:06.402549    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:08.486719    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:08.486719    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:08.487680    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:11.032622    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:11.032622    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:11.032622    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:44:11.035892    5624 start.go:128] duration metric: took 2m4.7658761s to createHost
	I0429 11:44:11.036525    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:13.163944    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:13.164021    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:13.164133    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:15.739025    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:15.739422    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:15.746073    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:44:15.746814    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:44:15.746814    5624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 11:44:15.878082    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391055.869843780
	
	I0429 11:44:15.878082    5624 fix.go:216] guest clock: 1714391055.869843780
	I0429 11:44:15.878082    5624 fix.go:229] Guest: 2024-04-29 11:44:15.86984378 +0000 UTC Remote: 2024-04-29 11:44:11.036488 +0000 UTC m=+334.725584301 (delta=4.83335578s)
	I0429 11:44:15.878202    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:17.925767    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:17.925815    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:17.925815    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:20.499212    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:20.499508    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:20.506087    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:44:20.506290    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.80 22 <nil> <nil>}
	I0429 11:44:20.506290    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714391055
	I0429 11:44:20.664519    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:44:15 UTC 2024
	
	I0429 11:44:20.664519    5624 fix.go:236] clock set: Mon Apr 29 11:44:15 UTC 2024
	 (err=<nil>)
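Host creation took over two minutes, during which the guest clock drifted 4.83s from the host-side "Remote" timestamp, so the provisioner reads date +%s.%N from the guest, compares it with the host time, and pins the guest clock with sudo date -s @<seconds> as shown above. A sketch of the delta computation using the values from this log (the SSH round-trips are elided):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	guestOut := "1714391055.869843780" // what the guest printed above
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(sec, nsec)
    	// The "Remote" timestamp from the log line above.
    	host := time.Date(2024, 4, 29, 11, 44, 11, 36488000, time.UTC)
    	delta := guest.Sub(host)
    	fmt.Printf("guest-host clock delta: %v\n", delta) // ~4.83s here
    	if delta > time.Second || delta < -time.Second {
    		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
    	}
    }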
	I0429 11:44:20.664519    5624 start.go:83] releasing machines lock for "ha-437800-m02", held for 2m14.3954344s
	I0429 11:44:20.664824    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:22.757042    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:22.757704    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:22.757844    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:25.261345    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:25.261635    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:25.265372    5624 out.go:177] * Found network options:
	I0429 11:44:25.268684    5624 out.go:177]   - NO_PROXY=172.26.176.3
	W0429 11:44:25.270733    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:44:25.273092    5624 out.go:177]   - NO_PROXY=172.26.176.3
	W0429 11:44:25.275248    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:44:25.276683    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:44:25.279331    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:44:25.279331    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:25.297766    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:44:25.297766    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:27.428995    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:27.441094    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:30.063522    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:30.063522    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:30.064673    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:30.087132    5624 main.go:141] libmachine: [stdout =====>] : 172.26.185.80
	
	I0429 11:44:30.087132    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:30.087556    5624 sshutil.go:53] new ssh client: &{IP:172.26.185.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m02\id_rsa Username:docker}
	I0429 11:44:30.252212    5624 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9544077s)
	I0429 11:44:30.252291    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9729216s)
	W0429 11:44:30.252389    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:44:30.265482    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:44:30.301604    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
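minikube manages the CNI choice itself on multi-node clusters, so any stock bridge or podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix to keep the runtime from loading them; the line above shows 87-podman-bridge.conflist being sidelined. The same rename pass as a local-filesystem sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		// Same predicate as the find(1) invocation above.
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }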
	I0429 11:44:30.301604    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:44:30.301604    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:44:30.355595    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:44:30.387680    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:44:30.408592    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:44:30.420658    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:44:30.454073    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:44:30.488741    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:44:30.523956    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:44:30.552964    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:44:30.589420    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:44:30.626656    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:44:30.660759    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:44:30.693221    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:44:30.725222    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:44:30.758219    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:30.978828    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:44:31.012336    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:44:31.024921    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:44:31.063927    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:44:31.098911    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:44:31.151915    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:44:31.188433    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:44:31.225593    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:44:31.290549    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:44:31.314138    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:44:31.364806    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:44:31.384313    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:44:31.406433    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:44:31.457094    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:44:31.681193    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:44:31.879845    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:44:31.879996    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:44:31.926619    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:32.143059    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:44:34.687305    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5442264s)
	I0429 11:44:34.701337    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:44:34.740720    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:44:34.780248    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:44:34.993950    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:44:35.210559    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:35.428939    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:44:35.475721    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:44:35.516612    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:35.747702    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 11:44:35.875483    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:44:35.889971    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 11:44:35.900252    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:44:35.913594    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:44:35.932666    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:44:35.995475    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:44:36.006070    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:44:36.056079    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:44:36.095205    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:44:36.098429    5624 out.go:177]   - env NO_PROXY=172.26.176.3
	I0429 11:44:36.103417    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:44:36.107417    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:44:36.110416    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:44:36.110416    5624 ip.go:210] interface addr: 172.26.176.1/20
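To give the guest a stable name for the Windows host, the driver scans host interfaces for the one backing the Default Switch and takes its IPv4 address, 172.26.176.1 here; that address is then written into the guest's /etc/hosts as host.minikube.internal on the next lines. The same interface scan in a few lines of Go:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, iface := range ifaces {
    		// Match the interface name prefix the log searches for.
    		if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
    			continue
    		}
    		addrs, err := iface.Addrs()
    		if err != nil {
    			panic(err)
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Println("host-side gateway IP:", ipnet.IP) // e.g. 172.26.176.1
    			}
    		}
    	}
    }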
	I0429 11:44:36.123771    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:44:36.131766    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:44:36.156153    5624 mustload.go:65] Loading cluster: ha-437800
	I0429 11:44:36.156997    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:44:36.157643    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:38.247518    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:38.247518    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:38.247518    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:44:38.248066    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.185.80
	I0429 11:44:38.248066    5624 certs.go:194] generating shared ca certs ...
	I0429 11:44:38.248066    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.249049    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:44:38.249476    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:44:38.249695    5624 certs.go:256] generating profile certs ...
	I0429 11:44:38.250369    5624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:44:38.250485    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e
	I0429 11:44:38.250623    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.185.80 172.26.191.254]
	I0429 11:44:38.620644    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e ...
	I0429 11:44:38.620644    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e: {Name:mk580a605ceda2e337454db64c47dc0599057a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.621643    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e ...
	I0429 11:44:38.621643    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e: {Name:mke1c5e386d821804eb4df2dee5e5f8ef6eebb15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:44:38.622935    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.3bc4921e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:44:38.635991    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.3bc4921e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
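Note the hash-like suffix: the apiserver cert is generated as apiserver.crt.3bc4921e and then copied over apiserver.crt. Since this join just added 172.26.185.80 and the HA VIP 172.26.191.254 to the SAN set, a distinct suffix keeps a cert generated for an older SAN list from being reused. A sketch of deriving such a suffix, assuming it is a short hash over the SANs (the exact scheme in minikube may differ):

    package main

    import (
    	"fmt"
    	"hash/fnv"
    	"sort"
    	"strings"
    )

    // sanSuffix derives an 8-hex-digit tag from the SAN set so that adding
    // an IP forces a new cert file name. Illustrative only.
    func sanSuffix(sans []string) string {
    	sorted := append([]string(nil), sans...)
    	sort.Strings(sorted)
    	h := fnv.New32a()
    	h.Write([]byte(strings.Join(sorted, ",")))
    	return fmt.Sprintf("%08x", h.Sum32())
    }

    func main() {
    	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "172.26.176.3", "172.26.185.80", "172.26.191.254"}
    	fmt.Println("apiserver.crt." + sanSuffix(sans))
    }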
	I0429 11:44:38.636928    5624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
	I0429 11:44:38.636928    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:44:38.637487    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:44:38.637772    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:44:38.638017    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:44:38.638085    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:44:38.638376    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:44:38.638585    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:44:38.638585    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:44:38.639113    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:44:38.639113    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:44:38.640516    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:44:38.640800    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:44:38.641354    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:44:38.641474    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:44:38.641474    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:38.642038    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:40.736761    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:40.736761    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:40.737366    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:43.329380    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:44:43.329380    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:43.330824    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
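The two PowerShell invocations above are the Hyper-V driver's standard handshake before any SSH work: confirm the VM is Running, read the first IPv4 address off its first network adapter, then open an SSH client to that address. A minimal sketch of the same pattern, shelling out to PowerShell exactly as the log does (the VM name is taken from the log; this is not minikube's driver code):

```go
// Sketch: query a Hyper-V VM's state and first IP via PowerShell, mirroring
// the "[executing ==>]" lines in the log. Windows + Hyper-V only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func powershell(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := powershell(`( Hyper-V\Get-VM ha-437800 ).state`)
	if err != nil {
		panic(err)
	}
	ip, err := powershell(`(( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.26.176.3
}
```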
	I0429 11:44:43.444162    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 11:44:43.460651    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 11:44:43.501769    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 11:44:43.509321    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 11:44:43.546215    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 11:44:43.554672    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 11:44:43.593367    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 11:44:43.602244    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 11:44:43.641506    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 11:44:43.648749    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 11:44:43.697348    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 11:44:43.704306    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0429 11:44:43.727023    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:44:43.780702    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:44:43.829749    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:44:43.877586    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:44:43.926185    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 11:44:43.975126    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:44:44.022408    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:44:44.074790    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:44:44.123858    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:44:44.173847    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:44:44.221051    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:44:44.269751    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 11:44:44.302102    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 11:44:44.335363    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 11:44:44.371626    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 11:44:44.407921    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 11:44:44.443521    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0429 11:44:44.479115    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
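Every transfer above follows the same check-then-copy pattern: probe the remote path with `stat -c "%s %y"`, treat a non-zero exit as "missing", and only then scp the file (either from disk or from an in-memory asset). A hedged sketch of that decision logic, assuming a hypothetical Runner interface standing in for minikube's ssh_runner.go:

```go
// Sketch of the existence-check-then-copy pattern visible in the log.
// Runner is hypothetical; the real logic lives in minikube's ssh_runner.go.
package sketch

import "fmt"

type Runner interface {
	Run(cmd string) error                    // non-nil error ⇔ command exited non-zero
	Copy(localPath, remotePath string) error // scp local → remote
}

func ensureRemoteFile(r Runner, local, remote string) error {
	// `stat` exits 1 when the file is missing, which surfaces here as an error
	// (the "existence check ... Process exited with status 1" lines above).
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already present: skip the transfer
	}
	return r.Copy(local, remote)
}
```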
	I0429 11:44:44.530351    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:44:44.554167    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:44:44.588089    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.595790    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.609346    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:44:44.632204    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:44:44.671345    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:44:44.706448    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.714278    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.727262    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:44:44.748436    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 11:44:44.785175    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:44:44.818806    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.825505    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.839640    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:44:44.864528    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
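The openssl/ln sequence above is how each CA gets installed into the VM's OpenSSL trust store: `openssl x509 -hash -noout` computes the subject hash, and the PEM is then symlinked at /etc/ssl/certs/<hash>.0 so OpenSSL's hashed directory lookup can find it (b5213941.0 for minikubeCA.pem, per the log). A small sketch of the same steps (illustrative; the log runs them remotely via sudo bash):

```go
// Sketch: install a CA into the OpenSSL trust store the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// equivalent of the `ln -fs <pem> <link>` the log runs via sudo
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
```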
	I0429 11:44:44.899290    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:44:44.906248    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:44:44.906248    5624 kubeadm.go:928] updating node {m02 172.26.185.80 8443 v1.30.0 docker true true} ...
	I0429 11:44:44.906827    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.185.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:44:44.906956    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:44:44.919174    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:44:44.944791    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:44:44.945325    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
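The generated kube-vip manifest pins the control-plane virtual IP 172.26.191.254 on eth0 with leader election, and (per the "auto-enabling" line above) load-balances port 8443 across control-plane nodes. A quick, illustrative reachability probe for that VIP, assuming you are on the same network segment as the cluster:

```go
// Sketch: verify the kube-vip virtual IP from the manifest above is accepting
// TCP connections on the API server port. Illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "172.26.191.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is accepting connections")
}
```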
	I0429 11:44:44.958547    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:44:44.977181    5624 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 11:44:44.989210    5624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 11:44:45.010958    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0429 11:44:45.011498    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0429 11:44:45.011498    5624 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0429 11:44:46.088829    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:44:46.100862    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:44:46.112840    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 11:44:46.112840    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 11:44:47.260346    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:44:47.276037    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:44:47.284750    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 11:44:47.284750    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 11:44:48.910747    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:44:48.937676    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:44:48.950297    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:44:48.958051    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 11:44:48.958051    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
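The download URLs above carry a `?checksum=file:...sha256` hint: each binary is fetched alongside its published SHA-256 and verified before being pushed to the node. A hedged, stdlib-only sketch of that verify-then-install flow (minikube's real download.go differs; the URL is taken from the log):

```go
// Sketch: download a release binary and verify it against its published
// .sha256 file before installing, mirroring the checksummed URLs in the log.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified:", want)
}
```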
	I0429 11:44:49.573575    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 11:44:49.592202    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 11:44:49.630020    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:44:49.670949    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 11:44:49.722997    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:44:49.730921    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
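The bash one-liner above is an idempotent /etc/hosts rewrite: strip any existing control-plane.minikube.internal entry, append the current VIP mapping, and copy the result back over /etc/hosts. The same logic in Go, for readability (illustrative; the log performs it remotely via bash and sudo):

```go
// Sketch of the idempotent /etc/hosts update performed by the one-liner above.
package main

import (
	"os"
	"strings"
)

func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // mirrors grep -v $'\t<host>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("/etc/hosts", "172.26.191.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
```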
	I0429 11:44:49.774728    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:44:50.009298    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:44:50.048395    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:44:50.048395    5624 start.go:316] joinCluster: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:44:50.049413    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 11:44:50.049413    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:44:52.155279    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:44:52.155279    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:52.156005    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:44:54.647307    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:44:54.648258    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:44:54.649035    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:44:54.892042    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8425917s)
	I0429 11:44:54.892042    5624 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:44:54.892042    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kw0b2.qry6qq722q05dz2j --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m02 --control-plane --apiserver-advertise-address=172.26.185.80 --apiserver-bind-port=8443"
	I0429 11:45:43.171446    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kw0b2.qry6qq722q05dz2j --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m02 --control-plane --apiserver-advertise-address=172.26.185.80 --apiserver-bind-port=8443": (48.2789691s)
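This is the two-step control-plane join the log performs: mint a non-expiring join token on the primary (`kubeadm token create --print-join-command --ttl=0`), then run the printed command on m02 with the extra control-plane flags minikube appends. A sketch of how such a command line is assembled; the values marked PLACEHOLDER stand in for the token and CA hash the first step returns (the real ones appear in the log above):

```go
// Illustrative assembly of a control-plane join command like the one in the log.
package main

import "fmt"

func joinCmd(endpoint, token, caHash, nodeName, advertiseIP string) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
			"--ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock "+
			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		endpoint, token, caHash, nodeName, advertiseIP)
}

func main() {
	fmt.Println(joinCmd(
		"control-plane.minikube.internal:8443",
		"PLACEHOLDER.token",  // from `kubeadm token create --print-join-command`
		"sha256:PLACEHOLDER", // discovery CA hash from the same command
		"ha-437800-m02",
		"172.26.185.80",
	))
}
```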
	I0429 11:45:43.171562    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 11:45:44.088437    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800-m02 minikube.k8s.io/updated_at=2024_04_29T11_45_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=false
	I0429 11:45:44.273766    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-437800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 11:45:44.432630    5624 start.go:318] duration metric: took 54.3838103s to joinCluster
	I0429 11:45:44.432630    5624 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:45:44.435930    5624 out.go:177] * Verifying Kubernetes components...
	I0429 11:45:44.433503    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:45:44.452994    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:45:44.866764    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:45:44.899509    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:45:44.900401    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 11:45:44.900497    5624 kubeadm.go:477] Overriding stale ClientConfig host https://172.26.191.254:8443 with https://172.26.176.3:8443
	I0429 11:45:44.901420    5624 node_ready.go:35] waiting up to 6m0s for node "ha-437800-m02" to be "Ready" ...
	I0429 11:45:44.901635    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:44.901635    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:44.901635    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:44.901690    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:44.920349    5624 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 11:45:45.416197    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:45.416197    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:45.416197    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:45.416197    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:45.590170    5624 round_trippers.go:574] Response Status: 200 OK in 173 milliseconds
	I0429 11:45:45.904939    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:45.905073    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:45.905073    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:45.905073    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:45.910522    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:46.410919    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:46.411022    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:46.411022    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:46.411022    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:46.428796    5624 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 11:45:46.901970    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:46.901970    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:46.901970    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:46.902276    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:46.915963    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:45:46.916193    5624 node_ready.go:53] node "ha-437800-m02" has status "Ready":"False"
	I0429 11:45:47.403594    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:47.403594    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:47.403594    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:47.403594    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:47.407449    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:47.910257    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:47.910295    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:47.910324    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:47.910324    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:47.915937    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:48.415959    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:48.415959    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:48.415959    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:48.415959    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:48.420593    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:48.905257    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:48.905257    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:48.905257    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:48.905257    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:48.911274    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:49.414881    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:49.414881    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:49.414881    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:49.414881    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:49.429464    5624 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 11:45:49.430503    5624 node_ready.go:53] node "ha-437800-m02" has status "Ready":"False"
	I0429 11:45:49.905318    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:49.905370    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:49.905370    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:49.905370    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.050527    5624 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0429 11:45:50.411430    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:50.411430    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:50.411430    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:50.411430    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.418059    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:50.913967    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:50.914243    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:50.914243    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:50.914243    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:50.919405    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:51.416973    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:51.416973    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.417263    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.417263    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.421896    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:51.423569    5624 node_ready.go:49] node "ha-437800-m02" has status "Ready":"True"
	I0429 11:45:51.423599    5624 node_ready.go:38] duration metric: took 6.5221281s for node "ha-437800-m02" to be "Ready" ...
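The round_trippers block above is a simple poll: GET /api/v1/nodes/ha-437800-m02 roughly every 500ms until the node's "Ready" condition flips to "True" (about 6.5s here). A hedged sketch of that polling shape; TLS verification and client-cert auth are deliberately stubbed out (a real client needs the cert/key/CA from the kapi.go client config above), so this shows the loop, not a drop-in client:

```go
// Sketch of the node-readiness poll in the log. Auth is omitted; sketch only.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no CA pinning
	}}
	url := "https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02" // endpoint from the log
	for {
		resp, err := client.Get(url)
		if err == nil {
			var n nodeStatus
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						fmt.Println("node Ready")
						return
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```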
	I0429 11:45:51.423666    5624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:45:51.423875    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:45:51.423875    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.423875    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.423875    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.435557    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:45:51.445803    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.445803    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vvf4j
	I0429 11:45:51.445803    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.445803    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.445803    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.456515    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:51.459308    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.459308    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.459308    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.459308    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.472673    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:45:51.473734    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.473734    5624 pod_ready.go:81] duration metric: took 27.931ms for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
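The per-pod waits that follow use the same GET-and-check loop as the node wait, just against the pod's condition list: a pod counts as Ready when its "Ready" condition reports "True". A minimal helper showing what each pod_ready.go check amounts to (illustrative, not minikube's code):

```go
// Sketch: the Ready-condition check behind the pod_ready waits below.
package sketch

type condition struct{ Type, Status string }

func podReady(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false // no Ready condition yet, e.g. pod still starting
}
```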
	I0429 11:45:51.473734    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.473734    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxvcx
	I0429 11:45:51.473734    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.473734    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.473734    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.484311    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:51.485235    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.485286    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.485286    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.485286    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.491626    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:51.491937    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.491937    5624 pod_ready.go:81] duration metric: took 18.2035ms for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.491937    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.491937    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800
	I0429 11:45:51.491937    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.491937    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.491937    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.501793    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:45:51.505535    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:51.505574    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.505574    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.505574    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.521428    5624 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0429 11:45:51.522509    5624 pod_ready.go:92] pod "etcd-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:51.522509    5624 pod_ready.go:81] duration metric: took 30.5716ms for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.522561    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:51.522731    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:51.522770    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.522770    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.522770    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.529057    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:51.529974    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:51.529974    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:51.529974    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:51.529974    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:51.534174    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.028329    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:52.028329    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.028433    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.028433    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.032596    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.034395    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:52.034523    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.034523    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.034523    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.038967    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:52.530191    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:52.530191    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.530191    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.530191    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.536169    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:52.536982    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:52.536982    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:52.536982    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:52.536982    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:52.541582    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.027884    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:53.027884    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.027884    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.027884    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.034466    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:53.036050    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:53.036112    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.036112    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.036112    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.039968    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:53.538452    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:53.538535    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.538535    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.538535    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.543289    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.544579    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:53.544579    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:53.544579    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:53.544579    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:53.548666    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:53.549891    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:54.037990    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:54.037990    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.038167    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.038167    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.044978    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.047478    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:54.047478    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.047478    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.047478    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.053596    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.529895    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:54.529895    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.529895    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.530008    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.536761    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:54.537046    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:54.537670    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:54.537670    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:54.537670    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:54.542507    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:55.035585    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:55.035585    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.035585    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.035585    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.041042    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:55.041918    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:55.041918    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.041918    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.041918    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.053830    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:45:55.527254    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:55.527315    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.527390    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.527390    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.532068    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:55.534222    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:55.534222    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:55.534222    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:55.534222    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:55.538460    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:56.033359    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:56.033359    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.033359    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.033359    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.038408    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.040284    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:56.040284    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.040284    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.040284    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.045366    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.045579    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:56.523859    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:56.523859    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.523859    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.523859    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.528910    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:56.531025    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:56.531097    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:56.531097    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:56.531173    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:56.535342    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:57.028554    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:57.028554    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.028648    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.028648    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.033992    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:57.035442    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:57.035520    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.035520    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.035520    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.040789    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:57.535067    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:57.535067    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.535067    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.535067    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.539696    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:57.540783    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:57.540783    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:57.540783    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:57.540783    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:57.546370    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:58.029249    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:58.029249    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.029310    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.029310    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.034127    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.035151    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:58.035214    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.035214    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.035214    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.039772    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.536476    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:58.536476    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.536476    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.536476    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.542113    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:58.544014    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:58.544014    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:58.544014    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:58.544014    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:58.549430    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:58.550005    5624 pod_ready.go:102] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 11:45:59.023811    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:59.023811    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.023811    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.023898    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.029858    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:59.031195    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.031252    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.031252    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.031252    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.035725    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.531346    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:45:59.531346    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.531346    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.531346    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.535953    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.537652    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.538181    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.538181    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.538181    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.544846    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:59.545434    5624 pod_ready.go:92] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.545434    5624 pod_ready.go:81] duration metric: took 8.0227797s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.545506    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.545619    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:45:59.545619    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.545696    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.545696    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.556236    5624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 11:45:59.557353    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.557353    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.557353    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.557353    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.561362    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.562601    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.562689    5624 pod_ready.go:81] duration metric: took 17.1832ms for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.562710    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.562839    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:45:59.562916    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.562916    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.562916    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.567498    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.568607    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.569174    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.569174    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.569174    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.572194    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.573884    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.573884    5624 pod_ready.go:81] duration metric: took 11.1745ms for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.573988    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.574101    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:45:59.574101    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.574167    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.574167    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.578958    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:45:59.579763    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.579763    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.579763    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.579763    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.583339    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.584380    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.584380    5624 pod_ready.go:81] duration metric: took 10.3914ms for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.584380    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.584380    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:45:59.584380    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.584380    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.584380    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.588354    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.589651    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:45:59.589651    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.589651    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.589651    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.593335    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:45:59.594603    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.594703    5624 pod_ready.go:81] duration metric: took 10.2231ms for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.594703    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:45:59.735558    5624 request.go:629] Waited for 140.4274ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:45:59.735812    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:45:59.735812    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.735812    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.735812    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.742144    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:45:59.941350    5624 request.go:629] Waited for 197.9448ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.941740    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:45:59.941740    5624 round_trippers.go:469] Request Headers:
	I0429 11:45:59.941740    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:45:59.941740    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:45:59.947450    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:45:59.948169    5624 pod_ready.go:92] pod "kube-proxy-hvzz9" in "kube-system" namespace has status "Ready":"True"
	I0429 11:45:59.948169    5624 pod_ready.go:81] duration metric: took 353.4633ms for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
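Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's own token-bucket rate limiter, not by the API server: once the readiness polls exceed the client's QPS budget, each request sleeps before being sent. A minimal sketch of the knob involved (the QPS/Burst values here are illustrative, not minikube's):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset builds a clientset with more client-side rate-limit
    // headroom than the defaults (QPS 5, Burst 10) that produce the
    // "Waited for ..." log lines above.
    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // illustrative value
    	cfg.Burst = 100 // illustrative value
    	return kubernetes.NewForConfig(cfg)
    }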
	I0429 11:45:59.948169    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.131790    5624 request.go:629] Waited for 183.515ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:46:00.132499    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:46:00.132499    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.132499    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.132499    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.137757    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.335071    5624 request.go:629] Waited for 194.6782ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:00.335175    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:00.335175    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.335237    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.335237    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.340709    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.341835    5624 pod_ready.go:92] pod "kube-proxy-pzfjr" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:00.341835    5624 pod_ready.go:81] duration metric: took 393.5582ms for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.341835    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.535429    5624 request.go:629] Waited for 193.5922ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:46:00.535429    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:46:00.535429    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.535429    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.535429    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.541363    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.740536    5624 request.go:629] Waited for 197.9892ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:46:00.740791    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:46:00.740791    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.740791    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.740867    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.746771    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:00.747631    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:00.747728    5624 pod_ready.go:81] duration metric: took 405.7921ms for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.747728    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:00.946125    5624 request.go:629] Waited for 198.0918ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:46:00.946125    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:46:00.946401    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:00.946401    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:00.946401    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:00.952201    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:46:01.137294    5624 request.go:629] Waited for 182.7172ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:01.137559    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:46:01.137559    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.137559    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.137559    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.143599    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:46:01.145595    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:46:01.145595    5624 pod_ready.go:81] duration metric: took 397.8638ms for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:46:01.145595    5624 pod_ready.go:38] duration metric: took 9.7218535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
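Note: every wait above has the same shape: GET the pod, check its Ready condition, GET the node it runs on, and repeat on a roughly 500ms tick (compare the request timestamps) until Ready or the 6m0s budget expires. A hedged sketch of that loop with client-go (the helper name is ours, not minikube's):

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True, mirroring
    // the pod_ready.go cadence above (~500ms interval, 6m timeout).
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }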
	I0429 11:46:01.145595    5624 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:46:01.158501    5624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:46:01.196449    5624 api_server.go:72] duration metric: took 16.7636888s to wait for apiserver process to appear ...
	I0429 11:46:01.196517    5624 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:46:01.196580    5624 api_server.go:253] Checking apiserver healthz at https://172.26.176.3:8443/healthz ...
	I0429 11:46:01.204092    5624 api_server.go:279] https://172.26.176.3:8443/healthz returned 200:
	ok
	I0429 11:46:01.204830    5624 round_trippers.go:463] GET https://172.26.176.3:8443/version
	I0429 11:46:01.204863    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.204863    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.204863    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.205705    5624 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0429 11:46:01.206647    5624 api_server.go:141] control plane version: v1.30.0
	I0429 11:46:01.206647    5624 api_server.go:131] duration metric: took 10.1299ms to wait for apiserver health ...
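Note: the healthz probe above is a plain HTTPS GET whose body must read "ok", exactly as logged. A minimal equivalent (TLS setup elided; the real client trusts the cluster CA):

    package health

    import (
    	"io"
    	"net/http"
    )

    // apiserverHealthy reports whether GET <endpoint>/healthz returns 200 "ok".
    func apiserverHealthy(client *http.Client, endpoint string) (bool, error) {
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }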
	I0429 11:46:01.206647    5624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:46:01.341614    5624 request.go:629] Waited for 134.9143ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.341773    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.341773    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.341820    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.341820    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.351197    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:46:01.358177    5624 system_pods.go:59] 17 kube-system pods found
	I0429 11:46:01.358177    5624 system_pods.go:61] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:46:01.358177    5624 system_pods.go:61] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:46:01.358177    5624 system_pods.go:74] duration metric: took 151.5284ms to wait for pod list to return data ...
	I0429 11:46:01.358177    5624 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:46:01.542386    5624 request.go:629] Waited for 184.2078ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:46:01.542721    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:46:01.542721    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.542721    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.542721    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.556688    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:46:01.557360    5624 default_sa.go:45] found service account: "default"
	I0429 11:46:01.557360    5624 default_sa.go:55] duration metric: took 199.1818ms for default service account to be created ...
	I0429 11:46:01.557360    5624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:46:01.745662    5624 request.go:629] Waited for 187.9148ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.745874    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:46:01.745874    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.745874    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.745874    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.755714    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:46:01.766837    5624 system_pods.go:86] 17 kube-system pods found
	I0429 11:46:01.766837    5624 system_pods.go:89] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:46:01.766837    5624 system_pods.go:89] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:46:01.766837    5624 system_pods.go:126] duration metric: took 209.4756ms to wait for k8s-apps to be running ...
	I0429 11:46:01.766837    5624 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:46:01.780207    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:46:01.808832    5624 system_svc.go:56] duration metric: took 41.9945ms WaitForService to wait for kubelet
	I0429 11:46:01.808925    5624 kubeadm.go:576] duration metric: took 17.3761597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:46:01.808995    5624 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:46:01.934720    5624 request.go:629] Waited for 125.4108ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes
	I0429 11:46:01.934902    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes
	I0429 11:46:01.934902    5624 round_trippers.go:469] Request Headers:
	I0429 11:46:01.934902    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:46:01.934995    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:46:01.943451    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:46:01.945023    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:46:01.945023    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:46:01.945023    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:46:01.945023    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:46:01.945023    5624 node_conditions.go:105] duration metric: took 136.0269ms to run NodePressure ...
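Note: the NodePressure step reads capacity straight off the Node objects; with two control-plane nodes of 2 CPUs and 17734596Ki ephemeral storage each, that accounts for the four capacity lines above. A short sketch (assumes a clientset as in the earlier snippets):

    package nodes

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity fields
    // checked by node_conditions.go above.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral-storage=%s cpu=%d\n",
    			n.Name,
    			n.Status.Capacity.StorageEphemeral().String(),
    			n.Status.Capacity.Cpu().Value())
    	}
    	return nil
    }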
	I0429 11:46:01.945023    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:46:01.945023    5624 start.go:254] writing updated cluster config ...
	I0429 11:46:01.949229    5624 out.go:177] 
	I0429 11:46:01.964913    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:46:01.964913    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:46:01.969831    5624 out.go:177] * Starting "ha-437800-m03" control-plane node in "ha-437800" cluster
	I0429 11:46:01.973470    5624 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 11:46:01.973470    5624 cache.go:56] Caching tarball of preloaded images
	I0429 11:46:01.973470    5624 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:46:01.973996    5624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 11:46:01.974268    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:46:01.983093    5624 start.go:360] acquireMachinesLock for ha-437800-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:46:01.983093    5624 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-437800-m03"
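Note: the lock spec printed above (Name/Clock/Delay/Timeout/Cancel) has the shape of github.com/juju/mutex's Spec, which serializes machine creation across concurrent minikube processes on the same host. A sketch of acquiring such a lock, assuming that library (Delay and Timeout are the 500ms/13m0s values from the log):

    package machine

    import (
    	"time"

    	"github.com/juju/clock"
    	mutex "github.com/juju/mutex/v2"
    )

    // withMachinesLock runs fn while holding the named host-wide mutex.
    func withMachinesLock(name string, fn func() error) error {
    	releaser, err := mutex.Acquire(mutex.Spec{
    		Name:    name,                   // e.g. the hash printed above
    		Clock:   clock.WallClock,
    		Delay:   500 * time.Millisecond, // retry interval from the log
    		Timeout: 13 * time.Minute,       // give up after the log's 13m0s
    	})
    	if err != nil {
    		return err
    	}
    	defer releaser.Release()
    	return fn()
    }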
	I0429 11:46:01.983093    5624 start.go:93] Provisioning new machine with config: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:46:01.983649    5624 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0429 11:46:01.986594    5624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 11:46:01.986808    5624 start.go:159] libmachine.API.Create for "ha-437800" (driver="hyperv")
	I0429 11:46:01.986808    5624 client.go:168] LocalClient.Create starting
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:46:01.986808    5624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 11:46:01.988037    5624 main.go:141] libmachine: Decoding PEM data...
	I0429 11:46:01.988181    5624 main.go:141] libmachine: Parsing certificate...
	I0429 11:46:01.988244    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:03.935181    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 11:46:05.768353    5624 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 11:46:05.768982    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:05.769070    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:46:07.402106    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:46:07.402106    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:07.402825    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:46:11.262097    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:46:11.262170    5624 main.go:141] libmachine: [stderr =====>] : 
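Note: each "[executing ==>]" / "[stdout =====>]" / "[stderr =====>]" triple above is the driver shelling out to powershell.exe and logging the captured streams. A minimal sketch of that helper (name is ours, not libmachine's):

    package hypervutil

    import (
    	"bytes"
    	"os/exec"
    )

    // psCmd runs one script through powershell.exe -NoProfile -NonInteractive
    // and returns stdout and stderr, which the driver logs as the
    // "[stdout =====>]" / "[stderr =====>]" lines seen above.
    func psCmd(script string) (stdout, stderr string, err error) {
    	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", script)
    	var out, errb bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &out, &errb
    	err = cmd.Run()
    	return out.String(), errb.String(), err
    }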
	I0429 11:46:11.264343    5624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:46:11.756290    5624 main.go:141] libmachine: Creating SSH key...
	I0429 11:46:11.880364    5624 main.go:141] libmachine: Creating VM...
	I0429 11:46:11.880364    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 11:46:14.917836    5624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 11:46:14.917836    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:14.918670    5624 main.go:141] libmachine: Using switch "Default Switch"
	I0429 11:46:14.918843    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 11:46:16.781713    5624 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 11:46:16.782623    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:16.782623    5624 main.go:141] libmachine: Creating VHD
	I0429 11:46:16.782623    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 11:46:20.542223    5624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CF6A7843-27CB-4BA6-9EA2-8DFB317FB644
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 11:46:20.542624    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:20.542624    5624 main.go:141] libmachine: Writing magic tar header
	I0429 11:46:20.542624    5624 main.go:141] libmachine: Writing SSH key tar header
	I0429 11:46:20.552938    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 11:46:23.795321    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:23.796308    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:23.796308    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd' -SizeBytes 20000MB
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [stderr =====>] : 
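Note: the VHD sequence above is deliberate: a tiny 10MB fixed-format VHD is created so raw bytes can be written at a known position, a tar archive carrying the generated SSH key is written into it ("Writing magic tar header" / "Writing SSH key tar header"), and only then is the file converted to a dynamic VHD and resized to 20000MB. The usual interpretation (the boot2docker convention, not something this log itself confirms) is that the guest detects the tar and preserves the key while formatting the disk. A hedged sketch of the tar-writing step with archive/tar:

    package provision

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar writes a tar archive containing the SSH public key at the
    // start of the fixed VHD's data area (offset 0 for a fixed-format VHD;
    // the in-archive path is an assumption for illustration).
    func writeKeyTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	if err := tw.WriteHeader(&tar.Header{
    		Name: ".ssh/authorized_keys", // assumed name; illustrative only
    		Mode: 0644,
    		Size: int64(len(pubKey)),
    	}); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close()
    }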
	I0429 11:46:26.338853    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 11:46:30.104389    5624 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-437800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 11:46:30.104478    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:30.104478    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-437800-m03 -DynamicMemoryEnabled $false
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:32.323266    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-437800-m03 -Count 2
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:34.533841    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\boot2docker.iso'
	I0429 11:46:37.111647    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:37.112216    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:37.112312    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-437800-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\disk.vhd'
	I0429 11:46:39.764115    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:39.764559    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:39.764559    5624 main.go:141] libmachine: Starting VM...
	I0429 11:46:39.764559    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-437800-m03
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:42.902101    5624 main.go:141] libmachine: Waiting for host to start...
	I0429 11:46:42.902101    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:45.237975    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:45.238553    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:45.238553    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:47.821560    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:47.821560    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:48.824829    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:51.089473    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:53.729210    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:53.729210    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:54.741468    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:46:56.941165    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:46:56.941235    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:46:56.941235    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:46:59.496536    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:46:59.496606    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:00.496647    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:02.729686    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:02.729781    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:02.729848    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:05.345785    5624 main.go:141] libmachine: [stdout =====>] : 
	I0429 11:47:05.345785    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:06.359637    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:08.583215    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:08.584072    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:08.584163    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:11.211931    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:11.212513    5624 main.go:141] libmachine: [stderr =====>] : 
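Note: the "Waiting for host to start..." loop above polls the VM state and then the first IP of its first NIC, sleeping about a second whenever the address is still empty (11:46:47.82 fails, 11:46:48.82 retries, and so on) until 172.26.177.113 appears. A sketch of that loop, reusing the hypothetical psCmd helper from the earlier snippet:

    // waitForIP polls until Hyper-V reports an address for the VM's first
    // adapter, matching the ~1s retry cadence in the log above. Imports:
    // "strings", "time".
    func waitForIP(vmName string) (string, error) {
    	query := "(( Hyper-V\\Get-VM " + vmName + " ).networkadapters[0]).ipaddresses[0]"
    	for {
    		out, _, err := psCmd(query)
    		if err != nil {
    			return "", err
    		}
    		if ip := strings.TrimSpace(out); ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    }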
	I0429 11:47:11.212713    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:13.398329    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:13.398381    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:13.398381    5624 machine.go:94] provisionDockerMachine start ...
	I0429 11:47:13.398381    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:15.651373    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:15.651373    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:15.651537    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:18.261274    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:18.261754    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:18.269219    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:18.281521    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:18.281521    5624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:47:18.413340    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
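Note: "Using SSH client type: native" means the provisioner dials the VM with an in-process Go SSH client rather than shelling out to ssh.exe. A minimal equivalent with golang.org/x/crypto/ssh (host-key checking is skipped for brevity; signer would come from the id_rsa generated earlier):

    package provision

    import "golang.org/x/crypto/ssh"

    // runSSH executes one command on the new node, as the hostname check
    // above does.
    func runSSH(addr string, signer ssh.Signer, command string) (string, error) {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }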
	
	I0429 11:47:18.413340    5624 buildroot.go:166] provisioning hostname "ha-437800-m03"
	I0429 11:47:18.413340    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:20.571577    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:20.572096    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:20.572297    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:23.166526    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:23.166526    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:23.173476    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:23.173476    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:23.173476    5624 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-437800-m03 && echo "ha-437800-m03" | sudo tee /etc/hostname
	I0429 11:47:23.340958    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-437800-m03
	
	I0429 11:47:23.340958    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:25.494819    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:25.494819    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:25.495491    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:28.117858    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:28.117858    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:28.124316    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:28.125046    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:28.125046    5624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-437800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-437800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-437800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:47:28.270803    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:47:28.270900    5624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 11:47:28.270968    5624 buildroot.go:174] setting up certificates
	I0429 11:47:28.271022    5624 provision.go:84] configureAuth start
	I0429 11:47:28.271022    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:30.406485    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:30.406694    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:30.406694    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:32.993151    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:32.994166    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:32.994166    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:35.127784    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:35.127878    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:35.127878    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:37.680355    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:37.680355    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:37.680355    5624 provision.go:143] copyHostCerts
	I0429 11:47:37.680355    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 11:47:37.681376    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 11:47:37.681376    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 11:47:37.681376    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 11:47:37.682373    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 11:47:37.682373    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 11:47:37.682373    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 11:47:37.683375    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 11:47:37.684372    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 11:47:37.684372    5624 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 11:47:37.684372    5624 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 11:47:37.684372    5624 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 11:47:37.685377    5624 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-437800-m03 san=[127.0.0.1 172.26.177.113 ha-437800-m03 localhost minikube]
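Note: the server-cert step above issues a certificate signed by the local CA, carrying exactly the SAN list from the log line (127.0.0.1, 172.26.177.113, ha-437800-m03, localhost, minikube) and the 26280h0m0s expiry from the cluster config. A simplified sketch with crypto/x509 (minikube's own helper differs in detail):

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert returns DER bytes for a CA-signed server certificate
    // with the SANs and lifetime seen in the log above.
    func newServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-437800-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.177.113")},
    		DNSNames:     []string{"ha-437800-m03", "localhost", "minikube"},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    }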
	I0429 11:47:37.858334    5624 provision.go:177] copyRemoteCerts
	I0429 11:47:37.871840    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:47:37.871840    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:40.013723    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:40.013784    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:40.013784    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:42.636118    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:42.636118    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:42.636118    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:47:42.751004    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.879075s)
	I0429 11:47:42.751058    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 11:47:42.751181    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:47:42.801862    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 11:47:42.801862    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:47:42.854096    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 11:47:42.854544    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 11:47:42.905578    5624 provision.go:87] duration metric: took 14.6343612s to configureAuth
	I0429 11:47:42.905638    5624 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:47:42.906384    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:47:42.906452    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:45.069113    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:45.069113    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:45.069591    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:47.653940    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:47.653940    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:47.659475    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:47.660395    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:47.660490    5624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 11:47:47.798799    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 11:47:47.798799    5624 buildroot.go:70] root file system type: tmpfs
	I0429 11:47:47.798799    5624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 11:47:47.798799    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:49.922938    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:52.547974    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:52.548746    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:52.555517    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:52.556188    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:52.556188    5624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.176.3"
	Environment="NO_PROXY=172.26.176.3,172.26.185.80"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 11:47:52.730280    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.176.3
	Environment=NO_PROXY=172.26.176.3,172.26.185.80
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 11:47:52.730280    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:47:54.868090    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:47:54.868090    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:54.868462    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:47:57.517408    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:47:57.517408    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:47:57.525741    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:47:57.525873    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:47:57.525873    5624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 11:47:59.747511    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 11:47:59.747511    5624 machine.go:97] duration metric: took 46.3487684s to provisionDockerMachine
	I0429 11:47:59.747511    5624 client.go:171] duration metric: took 1m57.7597842s to LocalClient.Create
	I0429 11:47:59.747714    5624 start.go:167] duration metric: took 1m57.7599876s to libmachine.API.Create "ha-437800"
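
The "diff -u ... || { mv ...; systemctl ... }" one-liner a few steps up is an idempotent install: when the freshly rendered docker.service.new matches what is already on disk, the diff succeeds and nothing restarts; any difference — including the file not existing yet, which is what the "can't stat" output above shows on this first boot — takes the move, daemon-reload, enable, restart branch. A compare-then-swap sketch of the same idea in Go; the paths are from the log, while the local file I/O in place of the remote runner is illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged replaces dst with src only when contents differ,
    // returning true when a swap (and hence a service restart) is needed.
    func installIfChanged(dst, src string) (bool, error) {
        oldData, err := os.ReadFile(dst) // a missing dst counts as "different"
        newData, err2 := os.ReadFile(src)
        if err2 != nil {
            return false, err2
        }
        if err == nil && bytes.Equal(oldData, newData) {
            return false, nil // unit already up to date; skip the restart
        }
        return true, os.Rename(src, dst)
    }

    func main() {
        changed, err := installIfChanged(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        if err != nil {
            panic(err)
        }
        if changed {
            fmt.Println("unit replaced: daemon-reload + enable + restart docker here")
        }
    }
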
	I0429 11:47:59.747796    5624 start.go:293] postStartSetup for "ha-437800-m03" (driver="hyperv")
	I0429 11:47:59.747796    5624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:47:59.760743    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:47:59.760743    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:01.884362    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:04.449517    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:04.449517    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:04.450554    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:04.554020    5624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7931328s)
	I0429 11:48:04.572980    5624 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:48:04.581788    5624 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:48:04.581844    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 11:48:04.581984    5624 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 11:48:04.583341    5624 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 11:48:04.583341    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 11:48:04.600142    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 11:48:04.620153    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 11:48:04.668576    5624 start.go:296] duration metric: took 4.9207417s for postStartSetup
	I0429 11:48:04.671227    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:06.814484    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:06.814484    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:06.814855    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:09.430083    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:09.430735    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:09.430990    5624 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\config.json ...
	I0429 11:48:09.433320    5624 start.go:128] duration metric: took 2m7.4486755s to createHost
	I0429 11:48:09.433530    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:11.569737    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:11.569737    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:11.570760    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:14.187728    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:14.188594    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:14.195144    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:48:14.195561    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:48:14.195648    5624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 11:48:14.314645    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391294.307573355
	
	I0429 11:48:14.314645    5624 fix.go:216] guest clock: 1714391294.307573355
	I0429 11:48:14.314645    5624 fix.go:229] Guest: 2024-04-29 11:48:14.307573355 +0000 UTC Remote: 2024-04-29 11:48:09.4334711 +0000 UTC m=+573.120707401 (delta=4.874102255s)
	I0429 11:48:14.314645    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:16.456121    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:16.456266    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:16.456396    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:19.086912    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:19.086912    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:19.094693    5624 main.go:141] libmachine: Using SSH client type: native
	I0429 11:48:19.094693    5624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.177.113 22 <nil> <nil>}
	I0429 11:48:19.094693    5624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714391294
	I0429 11:48:19.239356    5624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 11:48:14 UTC 2024
	
	I0429 11:48:19.239449    5624 fix.go:236] clock set: Mon Apr 29 11:48:14 UTC 2024
	 (err=<nil>)
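
The clock fix just above reads the guest's "date +%s.%N", compares it against the host-side reference, and resets the guest clock when the drift is material; here the delta was 4.874102255s and the log settles on "sudo date -s @1714391294". A sketch of the drift computation using the exact values from the log; the one-second reset threshold is an assumption, not something this log states:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the fix.go lines above.
        guest := time.Unix(1714391294, 307573355)                      // guest "date +%s.%N"
        host := time.Date(2024, 4, 29, 11, 48, 9, 433471100, time.UTC) // host reference
        drift := guest.Sub(host)
        fmt.Printf("delta=%v\n", drift) // 4.874102255s, matching the log
        if drift > time.Second || drift < -time.Second {
            // The log then issues the reset seen above:
            fmt.Println("would run: sudo date -s @1714391294")
        }
    }
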
	I0429 11:48:19.239449    5624 start.go:83] releasing machines lock for "ha-437800-m03", held for 2m17.2552844s
	I0429 11:48:19.239672    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:21.362263    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:21.362263    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:21.362906    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:23.943534    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:23.944566    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:23.951696    5624 out.go:177] * Found network options:
	I0429 11:48:23.955390    5624 out.go:177]   - NO_PROXY=172.26.176.3,172.26.185.80
	W0429 11:48:23.957968    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.957968    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 11:48:23.960050    5624 out.go:177]   - NO_PROXY=172.26.176.3,172.26.185.80
	W0429 11:48:23.962993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.962993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.963993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 11:48:23.963993    5624 proxy.go:119] fail to check proxy env: Error ip not in block
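
The repeated "fail to check proxy env: Error ip not in block" warnings come from testing whether each NO_PROXY entry (172.26.176.3 and 172.26.185.80, per the network options above) covers an address as a CIDR block; bare IPs are not blocks, so the parse errors and the check falls through. A sketch of that containment test; the exact-match fallback is an assumed intent, not minikube's verified code path:

    package main

    import (
        "fmt"
        "net"
    )

    // covered reports whether ip is excluded from proxying by entry,
    // treating entry as a CIDR block first and as an exact IP second.
    func covered(entry, ip string) bool {
        addr := net.ParseIP(ip)
        if _, block, err := net.ParseCIDR(entry); err == nil {
            return block.Contains(addr)
        }
        // "172.26.176.3" is not a block — ParseCIDR errors, hence the
        // "Error ip not in block" warnings above — so compare directly.
        return entry == ip
    }

    func main() {
        fmt.Println(covered("172.26.176.3", "172.26.177.113"))    // false
        fmt.Println(covered("172.26.176.0/20", "172.26.177.113")) // true
    }
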
	I0429 11:48:23.967383    5624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:48:23.967592    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:23.979340    5624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:48:23.979340    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 11:48:26.178556    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:26.178556    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:26.178685    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:26.178941    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:26.178941    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:26.179070    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:28.883720    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:28.884563    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:28.884884    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:28.912393    5624 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 11:48:28.912508    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:28.913175    5624 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 11:48:29.207261    5624 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2278798s)
	W0429 11:48:29.207367    5624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:48:29.207367    5624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2398468s)
	I0429 11:48:29.225176    5624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:48:29.257993    5624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 11:48:29.257993    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:48:29.257993    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:48:29.307882    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:48:29.342536    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:48:29.363578    5624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:48:29.376599    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:48:29.412496    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:48:29.446572    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:48:29.480855    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:48:29.515753    5624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:48:29.549677    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:48:29.585777    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:48:29.622568    5624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:48:29.662728    5624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:48:29.697682    5624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:48:29.731627    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:29.953855    5624 ssh_runner.go:195] Run: sudo systemctl restart containerd
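
The run of "sed -i -r" commands above rewrites /etc/containerd/config.toml in place — cgroupfs instead of the systemd cgroup driver, the runc v2 shim, the CNI conf dir — and then reloads systemd and restarts containerd so the edits take effect. The same line-oriented rewrite expressed in Go for the SystemdCgroup key alone; operating on an in-memory string rather than the remote file is illustrative:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func main() {
        config := []byte("[plugins]\n  SystemdCgroup = true\n")
        rewritten := systemdCgroup.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
        fmt.Print(string(rewritten)) // indentation preserved, value flipped
    }
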
	I0429 11:48:29.989119    5624 start.go:494] detecting cgroup driver to use...
	I0429 11:48:30.002975    5624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 11:48:30.045795    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:48:30.086990    5624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:48:30.142099    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:48:30.185868    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:48:30.223663    5624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:48:30.293850    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:48:30.323682    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:48:30.376694    5624 ssh_runner.go:195] Run: which cri-dockerd
	I0429 11:48:30.396234    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 11:48:30.414468    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 11:48:30.464785    5624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 11:48:30.685362    5624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 11:48:30.884801    5624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 11:48:30.884930    5624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 11:48:30.937469    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:31.159997    5624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 11:48:33.786003    5624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6259846s)
	I0429 11:48:33.799736    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 11:48:33.840506    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:48:33.878103    5624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 11:48:34.106428    5624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 11:48:34.323202    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:34.559839    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 11:48:34.607452    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 11:48:34.651341    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:34.875385    5624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 11:48:34.989218    5624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 11:48:35.005720    5624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 11:48:35.014247    5624 start.go:562] Will wait 60s for crictl version
	I0429 11:48:35.028741    5624 ssh_runner.go:195] Run: which crictl
	I0429 11:48:35.049460    5624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:48:35.114709    5624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 11:48:35.124732    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:48:35.169763    5624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 11:48:35.211496    5624 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 11:48:35.215480    5624 out.go:177]   - env NO_PROXY=172.26.176.3
	I0429 11:48:35.221141    5624 out.go:177]   - env NO_PROXY=172.26.176.3,172.26.185.80
	I0429 11:48:35.222988    5624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 11:48:35.226995    5624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 11:48:35.227993    5624 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 11:48:35.230702    5624 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 11:48:35.230702    5624 ip.go:210] interface addr: 172.26.176.1/20
	I0429 11:48:35.244718    5624 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 11:48:35.252357    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
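
The /etc/hosts edit above is a delete-then-append upsert: grep -v strips any stale "host.minikube.internal" line, the echo appends the current mapping, and the result is copied back over /etc/hosts via a temp file. A sketch of the same upsert in Go; the sample input line is invented to show the replacement:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" entry — the grep -v + echo pattern from the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        before := "127.0.0.1\tlocalhost\n172.26.176.9\thost.minikube.internal\n"
        fmt.Print(upsertHost(before, "172.26.176.1", "host.minikube.internal"))
    }
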
	I0429 11:48:35.279890    5624 mustload.go:65] Loading cluster: ha-437800
	I0429 11:48:35.280627    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:48:35.281075    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:37.407722    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:37.407722    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:37.408809    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:48:37.409554    5624 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800 for IP: 172.26.177.113
	I0429 11:48:37.409554    5624 certs.go:194] generating shared ca certs ...
	I0429 11:48:37.409554    5624 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.409872    5624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 11:48:37.410549    5624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 11:48:37.410989    5624 certs.go:256] generating profile certs ...
	I0429 11:48:37.411597    5624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\client.key
	I0429 11:48:37.411745    5624 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387
	I0429 11:48:37.411876    5624 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.176.3 172.26.185.80 172.26.177.113 172.26.191.254]
	I0429 11:48:37.985473    5624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 ...
	I0429 11:48:37.985473    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387: {Name:mk8f284536de05666171e9d2eb24ea992ac72bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.987600    5624 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387 ...
	I0429 11:48:37.987600    5624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387: {Name:mk2c8d4a06d020bda3f33fab6a0deb8a93c9ba22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:48:37.988713    5624 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt.5e79a387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt
	I0429 11:48:38.000248    5624 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key.5e79a387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key
	I0429 11:48:38.001918    5624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key
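
The apiserver serving cert generated above carries every address a client might dial: the in-cluster service IP 10.96.0.1, loopback, the three control-plane node IPs, and the kube-vip VIP 172.26.191.254, so TLS verification succeeds whichever endpoint is used. A sketch of building such a template with Go's crypto/x509 — the subject, serial, and the commented signing step are placeholders, not minikube's actual certificate code:

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
    )

    func main() {
        // The SAN set from the log: service IP, loopback, the three
        // control-plane node IPs, and the kube-vip VIP.
        sans := []string{
            "10.96.0.1", "127.0.0.1", "10.0.0.1",
            "172.26.176.3", "172.26.185.80", "172.26.177.113",
            "172.26.191.254",
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, s := range sans {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
        }
        fmt.Println("IP SANs in template:", tmpl.IPAddresses)
        // x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
        // would then sign the template against the shared minikubeCA.
    }
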
	I0429 11:48:38.001918    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 11:48:38.002623    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 11:48:38.002839    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 11:48:38.003363    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 11:48:38.003439    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 11:48:38.003439    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 11:48:38.004187    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 11:48:38.004384    5624 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 11:48:38.004384    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 11:48:38.004384    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 11:48:38.005034    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 11:48:38.005034    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 11:48:38.005671    5624 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 11:48:38.005671    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 11:48:38.006231    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:38.006402    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 11:48:38.006817    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:40.208926    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:40.208926    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:40.209398    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:42.847465    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:48:42.847465    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:42.848280    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:48:42.948236    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 11:48:42.957789    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 11:48:42.998369    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 11:48:43.006810    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 11:48:43.044908    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 11:48:43.053782    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 11:48:43.090147    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 11:48:43.099041    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 11:48:43.136646    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 11:48:43.144722    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 11:48:43.187102    5624 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 11:48:43.197106    5624 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0429 11:48:43.221711    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:48:43.278599    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:48:43.337198    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:48:43.388186    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:48:43.439253    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 11:48:43.489075    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 11:48:43.541788    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:48:43.594574    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-437800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:48:43.646409    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 11:48:43.696192    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:48:43.745945    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 11:48:43.797843    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 11:48:43.829408    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 11:48:43.865585    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 11:48:43.900330    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 11:48:43.935358    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 11:48:43.973524    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0429 11:48:44.008647    5624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 11:48:44.069863    5624 ssh_runner.go:195] Run: openssl version
	I0429 11:48:44.093850    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:48:44.129529    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.137218    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.150395    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:48:44.171446    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:48:44.208595    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 11:48:44.245233    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.254607    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.268606    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 11:48:44.293153    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 11:48:44.334059    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 11:48:44.373980    5624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.382159    5624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.395190    5624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 11:48:44.420888    5624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 11:48:44.458515    5624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:48:44.466343    5624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:48:44.466669    5624 kubeadm.go:928] updating node {m03 172.26.177.113 8443 v1.30.0 docker true true} ...
	I0429 11:48:44.466967    5624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-437800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.177.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:48:44.467107    5624 kube-vip.go:111] generating kube-vip config ...
	I0429 11:48:44.482880    5624 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 11:48:44.514531    5624 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 11:48:44.514649    5624 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.26.191.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
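
The manifest above runs kube-vip as a host-network static pod: each control-plane node's kubelet starts it straight from the manifest directory, leader election on the "plndr-cp-lock" lease decides which node answers for 172.26.191.254, and lb_enable spreads apiserver traffic on port 8443. As the scp at 11:48:46 below shows, the file lands in /etc/kubernetes/manifests; a sketch of that placement, assuming kubelet's default staticPodPath:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // kubelet watches this directory (its staticPodPath) and runs anything
        // dropped here as a static pod, with no apiserver involvement needed.
        manifestDir := "/etc/kubernetes/manifests"
        manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as above\n")
        dst := filepath.Join(manifestDir, "kube-vip.yaml")
        if err := os.WriteFile(dst, manifest, 0o600); err != nil {
            fmt.Fprintln(os.Stderr, "write manifest:", err)
            os.Exit(1)
        }
        fmt.Println("static pod manifest installed at", dst)
    }
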
	I0429 11:48:44.530620    5624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:48:44.554250    5624 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 11:48:44.568402    5624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 11:48:44.591286    5624 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 11:48:44.591286    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:48:44.591286    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:48:44.608026    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:48:44.609600    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 11:48:44.609600    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 11:48:44.636797    5624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:48:44.636797    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 11:48:44.637077    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 11:48:44.637119    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 11:48:44.637216    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 11:48:44.650825    5624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 11:48:44.720177    5624 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 11:48:44.720177    5624 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
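
The three "Not caching binary" URLs above append "?checksum=file:<url>.sha256", i.e. the downloader fetches the published SHA-256 alongside each binary and verifies it before the scp pushes kubeadm, kubectl, and kubelet into /var/lib/minikube/binaries/v1.30.0. A sketch of that verification step; the file name and digest in main are placeholders:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
    )

    // verify checks a file against the hex digest published in the matching
    // .sha256 file, as the ?checksum=file:... URLs above request.
    func verify(path, wantHex string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        sum := sha256.Sum256(data)
        if got := hex.EncodeToString(sum[:]); got != wantHex {
            return fmt.Errorf("checksum mismatch for %s: got %s want %s", path, got, wantHex)
        }
        return nil
    }

    func main() {
        if err := verify("kubelet", "0123abcd"); err != nil { // digest illustrative
            fmt.Fprintln(os.Stderr, err)
        }
    }
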
	I0429 11:48:46.020766    5624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 11:48:46.045009    5624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0429 11:48:46.086515    5624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:48:46.122290    5624 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 11:48:46.169395    5624 ssh_runner.go:195] Run: grep 172.26.191.254	control-plane.minikube.internal$ /etc/hosts
	I0429 11:48:46.177101    5624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.191.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:48:46.217764    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:48:46.429770    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:48:46.464260    5624 host.go:66] Checking if "ha-437800" exists ...
	I0429 11:48:46.465051    5624 start.go:316] joinCluster: &{Name:ha-437800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-437800 Namespace:default APIServerHAVIP:172.26.191.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.176.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.185.80 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:48:46.465215    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 11:48:46.465294    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:48.619574    5624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 11:48:51.260864    5624 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 11:48:51.260864    5624 main.go:141] libmachine: [stderr =====>] : 
	I0429 11:48:51.260864    5624 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 11:48:51.475822    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0105685s)
	I0429 11:48:51.475977    5624 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:48:51.475977    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 506mov.idrjb78fiqa494du --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m03 --control-plane --apiserver-advertise-address=172.26.177.113 --apiserver-bind-port=8443"
	I0429 11:49:36.038236    5624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 506mov.idrjb78fiqa494du --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-437800-m03 --control-plane --apiserver-advertise-address=172.26.177.113 --apiserver-bind-port=8443": (44.5617936s)
	I0429 11:49:36.038361    5624 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 11:49:36.869696    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-437800-m03 minikube.k8s.io/updated_at=2024_04_29T11_49_36_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=ha-437800 minikube.k8s.io/primary=false
	I0429 11:49:37.053684    5624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-437800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 11:49:37.217226    5624 start.go:318] duration metric: took 50.751777s to joinCluster
	I0429 11:49:37.217226    5624 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.26.177.113 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 11:49:37.218398    5624 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:49:37.222225    5624 out.go:177] * Verifying Kubernetes components...
	I0429 11:49:37.237511    5624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:49:37.601991    5624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:49:37.635628    5624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:49:37.636430    5624 kapi.go:59] client config for ha-437800: &rest.Config{Host:"https://172.26.191.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-437800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 11:49:37.636567    5624 kubeadm.go:477] Overriding stale ClientConfig host https://172.26.191.254:8443 with https://172.26.176.3:8443
	I0429 11:49:37.637433    5624 node_ready.go:35] waiting up to 6m0s for node "ha-437800-m03" to be "Ready" ...
	I0429 11:49:37.637756    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:37.637756    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:37.637756    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:37.637756    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:37.651755    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:49:38.152578    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:38.152578    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:38.152578    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:38.152578    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:38.157168    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:38.642229    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:38.642229    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:38.642486    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:38.642486    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:38.649066    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:39.146439    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:39.146496    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:39.146496    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:39.146496    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:39.150379    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:39.652743    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:39.652743    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:39.652818    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:39.652818    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:39.657213    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:39.659818    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:40.143391    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:40.143391    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:40.143622    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:40.143622    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:40.148081    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:40.648885    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:40.648885    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:40.648885    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:40.648885    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:40.654846    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:41.138051    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:41.138164    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:41.138164    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:41.138164    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:41.143519    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:41.642216    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:41.642216    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:41.642216    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:41.642216    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:41.647897    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:42.145757    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:42.146023    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:42.146023    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:42.146023    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:42.150650    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:42.151813    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:42.651964    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:42.652095    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:42.652095    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:42.652095    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:42.660515    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:43.140284    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:43.140284    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:43.140284    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:43.140284    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:43.248924    5624 round_trippers.go:574] Response Status: 200 OK in 108 milliseconds
	I0429 11:49:43.642727    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:43.642727    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:43.642727    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:43.642727    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:43.647206    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:44.146188    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:44.146188    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:44.146188    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:44.146188    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:44.203686    5624 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0429 11:49:44.205727    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:44.650910    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:44.650910    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:44.650910    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:44.650910    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:44.662258    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:45.151704    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:45.151704    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:45.151924    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:45.151924    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:45.167548    5624 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 11:49:45.642135    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:45.642135    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:45.642135    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:45.642135    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:45.648179    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:46.144863    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:46.144959    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:46.144959    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:46.144959    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:46.149829    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:46.649758    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:46.649758    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:46.649758    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:46.649758    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:46.654765    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:46.655741    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:47.142009    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:47.142239    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:47.142239    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:47.142239    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:47.147501    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:47.646447    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:47.646447    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:47.646447    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:47.646447    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:47.652147    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:48.147412    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:48.147412    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:48.147503    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:48.147503    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:48.157026    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:48.651054    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:48.651166    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:48.651166    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:48.651166    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:48.655454    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:48.656966    5624 node_ready.go:53] node "ha-437800-m03" has status "Ready":"False"
	I0429 11:49:49.150180    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:49.150358    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:49.150358    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:49.150358    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:49.155141    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:49.650360    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:49.650610    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:49.650610    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:49.650610    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:49.659599    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:50.152440    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.152440    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.152440    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.152440    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.158076    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.158891    5624 node_ready.go:49] node "ha-437800-m03" has status "Ready":"True"
	I0429 11:49:50.158960    5624 node_ready.go:38] duration metric: took 12.5213623s for node "ha-437800-m03" to be "Ready" ...
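
The ~500ms cadence of the GETs above is the node readiness poll: fetch the node object, inspect its Ready condition, and log `has status "Ready":"False"` periodically until it flips to True. A sketch of that loop with client-go; the interval and helper name are illustrative, not the exact node_ready.go code:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node roughly every 500ms (matching the cadence
// seen above) until its Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
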
	I0429 11:49:50.158960    5624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:49:50.159027    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:50.159027    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.159027    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.159027    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.170847    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:50.179668    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.179668    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vvf4j
	I0429 11:49:50.179668    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.179668    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.179668    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.185670    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:50.186727    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.186727    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.186727    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.186727    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.191665    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.191665    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.191665    5624 pod_ready.go:81] duration metric: took 11.9965ms for pod "coredns-7db6d8ff4d-vvf4j" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.191665    5624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.191665    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxvcx
	I0429 11:49:50.191665    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.191665    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.191665    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.195720    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.196665    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.196665    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.196665    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.196665    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.200649    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:50.201716    5624 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.201716    5624 pod_ready.go:81] duration metric: took 10.051ms for pod "coredns-7db6d8ff4d-zxvcx" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.201716    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.201716    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800
	I0429 11:49:50.201716    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.201716    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.201716    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.204676    5624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 11:49:50.205653    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.205653    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.205653    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.205653    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.209665    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.210658    5624 pod_ready.go:92] pod "etcd-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.210658    5624 pod_ready.go:81] duration metric: took 8.9415ms for pod "etcd-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.210658    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.210658    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m02
	I0429 11:49:50.210658    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.210658    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.210658    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.213669    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:50.215211    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:50.215211    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.215211    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.215211    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.219818    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.221083    5624 pod_ready.go:92] pod "etcd-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.221083    5624 pod_ready.go:81] duration metric: took 10.4253ms for pod "etcd-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.221083    5624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.357368    5624 request.go:629] Waited for 136.2836ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m03
	I0429 11:49:50.357902    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/etcd-ha-437800-m03
	I0429 11:49:50.357976    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.357976    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.357976    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.364335    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:50.562194    5624 request.go:629] Waited for 196.3432ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.562194    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:50.562194    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.562194    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.562194    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.567100    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:50.568606    5624 pod_ready.go:92] pod "etcd-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.568606    5624 pod_ready.go:81] duration metric: took 347.5205ms for pod "etcd-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
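
The `request.go:629` lines are client-go's client-side rate limiter at work, not API Priority and Fairness: with QPS and Burst left at 0 in the rest.Config dumped earlier, client-go falls back to its defaults of 5 requests/s with a burst of 10, so these back-to-back pod and node GETs start queueing for ~150-200ms each. A harness that wanted to avoid the waits could widen the token bucket; the values below are arbitrary:

package tuning

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// relaxThrottle replaces the default client-side limiter (5 QPS, burst 10,
// used whenever QPS/Burst are zero) with a wider token bucket, which would
// suppress the "Waited ... due to client-side throttling" messages above.
func relaxThrottle(cfg *rest.Config) {
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
}
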
	I0429 11:49:50.568606    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.765156    5624 request.go:629] Waited for 196.3034ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:49:50.765293    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800
	I0429 11:49:50.765293    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.765293    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.765293    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.770665    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.953784    5624 request.go:629] Waited for 180.9009ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.953883    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:50.953883    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:50.953883    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:50.953883    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:50.959392    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:50.960580    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:50.960580    5624 pod_ready.go:81] duration metric: took 391.9709ms for pod "kube-apiserver-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:50.960580    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.157035    5624 request.go:629] Waited for 195.6473ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:49:51.157295    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m02
	I0429 11:49:51.157295    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.157295    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.157295    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.174304    5624 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0429 11:49:51.361375    5624 request.go:629] Waited for 184.594ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:51.361642    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:51.361642    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.361642    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.361642    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.371599    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:51.372293    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:51.372293    5624 pod_ready.go:81] duration metric: took 411.7094ms for pod "kube-apiserver-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.372293    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.562808    5624 request.go:629] Waited for 190.3945ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m03
	I0429 11:49:51.563020    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-437800-m03
	I0429 11:49:51.563020    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.563020    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.563020    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.571023    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:51.767691    5624 request.go:629] Waited for 195.7151ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:51.767691    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:51.767691    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.767691    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.767691    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.771377    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:51.772692    5624 pod_ready.go:92] pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:51.772905    5624 pod_ready.go:81] duration metric: took 400.3265ms for pod "kube-apiserver-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.772905    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:51.961458    5624 request.go:629] Waited for 188.2851ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:49:51.961548    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800
	I0429 11:49:51.961548    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:51.961548    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:51.961548    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:51.966447    5624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 11:49:52.166287    5624 request.go:629] Waited for 198.253ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:52.166560    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:52.166560    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.166560    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.166622    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.170954    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:52.172041    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.172041    5624 pod_ready.go:81] duration metric: took 399.0662ms for pod "kube-controller-manager-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.172041    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.353242    5624 request.go:629] Waited for 181.0336ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:49:52.353437    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m02
	I0429 11:49:52.353502    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.353502    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.353502    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.359280    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:52.557760    5624 request.go:629] Waited for 196.655ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:52.557760    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:52.557958    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.557958    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.558009    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.562768    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:52.564210    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.564270    5624 pod_ready.go:81] duration metric: took 392.2261ms for pod "kube-controller-manager-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.564270    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.759774    5624 request.go:629] Waited for 195.259ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m03
	I0429 11:49:52.759774    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-437800-m03
	I0429 11:49:52.759976    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.759976    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.759976    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.767012    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:52.961408    5624 request.go:629] Waited for 193.4518ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:52.961585    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:52.961756    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:52.961818    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:52.961818    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:52.967463    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:52.968920    5624 pod_ready.go:92] pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:52.968920    5624 pod_ready.go:81] duration metric: took 404.6467ms for pod "kube-controller-manager-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:52.968920    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tjfd" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.163020    5624 request.go:629] Waited for 193.8936ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tjfd
	I0429 11:49:53.163212    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tjfd
	I0429 11:49:53.163212    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.163212    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.163212    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.170074    5624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 11:49:53.366408    5624 request.go:629] Waited for 194.5578ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:53.366620    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:53.366620    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.366675    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.366675    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.375248    5624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 11:49:53.377274    5624 pod_ready.go:92] pod "kube-proxy-2tjfd" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:53.377361    5624 pod_ready.go:81] duration metric: took 408.4386ms for pod "kube-proxy-2tjfd" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.377361    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.553033    5624 request.go:629] Waited for 175.573ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:49:53.553255    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hvzz9
	I0429 11:49:53.553255    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.553255    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.553365    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.560459    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:53.755715    5624 request.go:629] Waited for 194.2778ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:53.756052    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:53.756052    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.756052    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.756052    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.761246    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:53.762903    5624 pod_ready.go:92] pod "kube-proxy-hvzz9" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:53.762988    5624 pod_ready.go:81] duration metric: took 385.6239ms for pod "kube-proxy-hvzz9" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.762988    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:53.958079    5624 request.go:629] Waited for 195.0082ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:49:53.958510    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzfjr
	I0429 11:49:53.958610    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:53.958610    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:53.958610    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:53.964497    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:54.160852    5624 request.go:629] Waited for 195.234ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.160919    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.161034    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.161034    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.161034    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.165681    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.167413    5624 pod_ready.go:92] pod "kube-proxy-pzfjr" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.167413    5624 pod_ready.go:81] duration metric: took 404.4217ms for pod "kube-proxy-pzfjr" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.167413    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.364943    5624 request.go:629] Waited for 197.3068ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:49:54.365099    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800
	I0429 11:49:54.365099    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.365099    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.365099    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.369547    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.566485    5624 request.go:629] Waited for 195.4733ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:54.566695    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800
	I0429 11:49:54.566940    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.566940    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.566940    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.571353    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.572347    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.572347    5624 pod_ready.go:81] duration metric: took 404.9307ms for pod "kube-scheduler-ha-437800" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.572347    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.753063    5624 request.go:629] Waited for 180.5639ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:49:54.753279    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m02
	I0429 11:49:54.753279    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.753279    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.753279    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.762860    5624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 11:49:54.956613    5624 request.go:629] Waited for 192.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.956866    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m02
	I0429 11:49:54.956866    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:54.956866    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:54.956866    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:54.961931    5624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 11:49:54.963366    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:54.963366    5624 pod_ready.go:81] duration metric: took 391.0157ms for pod "kube-scheduler-ha-437800-m02" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:54.963366    5624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:55.159572    5624 request.go:629] Waited for 195.7577ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m03
	I0429 11:49:55.159878    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-437800-m03
	I0429 11:49:55.160014    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.160014    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.160014    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.165654    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:55.362070    5624 request.go:629] Waited for 194.4042ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:55.362143    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes/ha-437800-m03
	I0429 11:49:55.362143    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.362143    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.362143    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.369968    5624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 11:49:55.372630    5624 pod_ready.go:92] pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 11:49:55.372697    5624 pod_ready.go:81] duration metric: took 409.3285ms for pod "kube-scheduler-ha-437800-m03" in "kube-system" namespace to be "Ready" ...
	I0429 11:49:55.372777    5624 pod_ready.go:38] duration metric: took 5.2137759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
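
Each pod wait above follows the same shape: GET the pod, then GET the node it is scheduled on, and declare success once the pod's PodReady condition is True. A simplified sketch of the pod half of that check; the actual pod_ready.go also re-checks the hosting node, which is omitted here:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a single pod until its PodReady condition reports True.
func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
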
	I0429 11:49:55.372777    5624 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:49:55.387731    5624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:49:55.417751    5624 api_server.go:72] duration metric: took 18.2003297s to wait for apiserver process to appear ...
	I0429 11:49:55.417751    5624 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:49:55.417751    5624 api_server.go:253] Checking apiserver healthz at https://172.26.176.3:8443/healthz ...
	I0429 11:49:55.426551    5624 api_server.go:279] https://172.26.176.3:8443/healthz returned 200:
	ok
	I0429 11:49:55.427092    5624 round_trippers.go:463] GET https://172.26.176.3:8443/version
	I0429 11:49:55.427092    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.427092    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.427092    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.429067    5624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 11:49:55.429067    5624 api_server.go:141] control plane version: v1.30.0
	I0429 11:49:55.429067    5624 api_server.go:131] duration metric: took 11.3165ms to wait for apiserver health ...
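
With all system pods Ready, the check pivots from the Kubernetes API to a raw HTTPS probe of /healthz, expecting a 200 with the literal body `ok`, then reads /version to confirm the control-plane build (v1.30.0 here). A standalone sketch of the same probe; it skips certificate verification and assumes the endpoint answers unauthenticated health checks, whereas the real run authenticates with the profile's client certificate and CA from the rest.Config above:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: skip TLS verification instead of loading .minikube\ca.crt.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.26.176.3:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200 and "ok"
}
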
	I0429 11:49:55.429067    5624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:49:55.565553    5624 request.go:629] Waited for 136.2445ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.565752    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.565752    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.565752    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.565752    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.577472    5624 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 11:49:55.590022    5624 system_pods.go:59] 24 kube-system pods found
	I0429 11:49:55.590022    5624 system_pods.go:61] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:49:55.590022    5624 system_pods.go:61] "etcd-ha-437800-m03" [fba838a1-ccbb-4d11-8f65-54f6a134946e] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-7cn9p" [7eb5ba76-640d-4092-abb9-dd1b95d5f39d] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-apiserver-ha-437800-m03" [8e35959a-f76f-4f30-8536-7205acdf70a1] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-controller-manager-ha-437800-m03" [370a7b65-2d41-4f57-8c9c-418e0ebc24cb] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-2tjfd" [ce4ffe20-47ae-438d-ad34-e2d2e06eda4f] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:49:55.590611    5624 system_pods.go:61] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-scheduler-ha-437800-m03" [fde709a1-d79f-42fd-adf8-d2b60995c8f3] Running
	I0429 11:49:55.590715    5624 system_pods.go:61] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "kube-vip-ha-437800-m03" [5b4aa283-605d-45db-aaa4-cf75723a2870] Running
	I0429 11:49:55.590747    5624 system_pods.go:61] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:49:55.590747    5624 system_pods.go:74] duration metric: took 161.6786ms to wait for pod list to return data ...
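
The 24-pod inventory above comes from a single list of the kube-system namespace followed by a per-pod phase check. Roughly, with client-go:

package inventory

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listRunning reproduces the "24 kube-system pods found" tally and the
// per-pod Running lines.
func listRunning(ctx context.Context, client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			fmt.Printf("%q Running\n", p.Name)
		}
	}
	return nil
}
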
	I0429 11:49:55.590747    5624 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:49:55.765505    5624 request.go:629] Waited for 174.7564ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:49:55.765792    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/default/serviceaccounts
	I0429 11:49:55.765792    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.765792    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.765792    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.782371    5624 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 11:49:55.782975    5624 default_sa.go:45] found service account: "default"
	I0429 11:49:55.783036    5624 default_sa.go:55] duration metric: took 192.2262ms for default service account to be created ...
	I0429 11:49:55.783036    5624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:49:55.954809    5624 request.go:629] Waited for 171.4128ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.954895    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/namespaces/kube-system/pods
	I0429 11:49:55.954957    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:55.954957    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:55.954957    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:55.968748    5624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 11:49:55.980658    5624 system_pods.go:86] 24 kube-system pods found
	I0429 11:49:55.980721    5624 system_pods.go:89] "coredns-7db6d8ff4d-vvf4j" [cc00761a-60fb-4c04-9502-c0aa8b88e45a] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "coredns-7db6d8ff4d-zxvcx" [7f8c7504-7c8b-4d15-bcb0-63320257debc] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800" [4c2ad87e-0a97-4414-bc1c-30c4d5d5b58f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800-m02" [9bd90d2f-eaff-4f49-acac-669292904ac9] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "etcd-ha-437800-m03" [fba838a1-ccbb-4d11-8f65-54f6a134946e] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-7cn9p" [7eb5ba76-640d-4092-abb9-dd1b95d5f39d] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-qg7qh" [cba63805-bae0-48e9-93b5-7ed38b14846f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kindnet-qgbzf" [8e86dd3b-eb48-4bd5-a3f8-38f53d7c2bd8] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800" [21394aa6-39d0-40b0-9335-e618e86ccbd5] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800-m02" [167ef62e-bb21-4605-b821-f469de4aedf5] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-apiserver-ha-437800-m03" [8e35959a-f76f-4f30-8536-7205acdf70a1] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800" [5233d18d-4b1a-4846-84c5-08043f05cd40] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m02" [881ec6cd-768c-46f0-b10f-56f2a33172f3] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-controller-manager-ha-437800-m03" [370a7b65-2d41-4f57-8c9c-418e0ebc24cb] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-2tjfd" [ce4ffe20-47ae-438d-ad34-e2d2e06eda4f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-hvzz9" [ea3045a9-bcea-4757-80a4-70361f030a6b] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-proxy-pzfjr" [69ec7440-fd5b-4cee-8c37-e4a610b48570] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800" [db1d725b-2fe3-4ff5-960d-48498bd58597] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800-m02" [97b2e475-ff85-4601-8ded-f8e759fee82f] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-scheduler-ha-437800-m03" [fde709a1-d79f-42fd-adf8-d2b60995c8f3] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800" [b777794b-764c-42d5-8a96-2463488c0738] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800-m02" [ed988926-35c5-4fb8-9e43-f50960fa81aa] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "kube-vip-ha-437800-m03" [5b4aa283-605d-45db-aaa4-cf75723a2870] Running
	I0429 11:49:55.980721    5624 system_pods.go:89] "storage-provisioner" [f3b60672-2de9-4a05-86cc-b3b7ed019410] Running
	I0429 11:49:55.980721    5624 system_pods.go:126] duration metric: took 197.6827ms to wait for k8s-apps to be running ...
	I0429 11:49:55.980721    5624 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:49:55.995953    5624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:49:56.025882    5624 system_svc.go:56] duration metric: took 45.1616ms WaitForService to wait for kubelet
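
`systemctl is-active --quiet` communicates purely through its exit status, 0 for active and non-zero otherwise, which is why the log records only a duration and no output. A local illustration of the same check; ssh_runner actually executes it over SSH inside the VM, and the argument list below is copied from the log line above:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when the unit reports active; a non-zero exit
// status surfaces as a non-nil error from Run.
func kubeletActive() bool {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil
}

func main() { fmt.Println("kubelet active:", kubeletActive()) }
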
	I0429 11:49:56.026001    5624 kubeadm.go:576] duration metric: took 18.8086275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:49:56.026001    5624 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:49:56.158833    5624 request.go:629] Waited for 132.7647ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.176.3:8443/api/v1/nodes
	I0429 11:49:56.158960    5624 round_trippers.go:463] GET https://172.26.176.3:8443/api/v1/nodes
	I0429 11:49:56.158960    5624 round_trippers.go:469] Request Headers:
	I0429 11:49:56.159043    5624 round_trippers.go:473]     Accept: application/json, */*
	I0429 11:49:56.159043    5624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 11:49:56.165344    5624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 11:49:56.167126    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167260    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167260    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167260    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167340    5624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:49:56.167340    5624 node_conditions.go:123] node cpu capacity is 2
	I0429 11:49:56.167340    5624 node_conditions.go:105] duration metric: took 141.3382ms to run NodePressure ...
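
The NodePressure step lists all three nodes and records each one's capacity, which is where the repeated `17734596Ki` / `cpu capacity is 2` pairs above come from. A sketch of reading those figures back with client-go:

package inventory

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity reports per-node ephemeral storage and CPU capacity, the two
// values logged by node_conditions.go above.
func printCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
	}
	return nil
}
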
	I0429 11:49:56.167470    5624 start.go:240] waiting for startup goroutines ...
	I0429 11:49:56.167470    5624 start.go:254] writing updated cluster config ...
	I0429 11:49:56.182701    5624 ssh_runner.go:195] Run: rm -f paused
	I0429 11:49:56.343181    5624 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 11:49:56.346724    5624 out.go:177] * Done! kubectl is now configured to use "ha-437800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550473987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550496088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.550643088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.600847231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601231032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601432232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:42:11 ha-437800 dockerd[1322]: time="2024-04-29T11:42:11.601803833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.059624103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060225905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060252605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 dockerd[1322]: time="2024-04-29T11:50:35.060501005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:35 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/683f67e5fac4a33e11059922b81272badb370df8d76464f94848a3495a78bf04/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 11:50:36 ha-437800 cri-dockerd[1222]: time="2024-04-29T11:50:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.912861006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913033008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913056208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:50:36 ha-437800 dockerd[1322]: time="2024-04-29T11:50:36.913192409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 11:51:40 ha-437800 dockerd[1316]: 2024/04/29 11:51:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
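	
	The eight "superfluous response.WriteHeader" warnings above are Go's net/http
	package reporting that WriteHeader was called more than once on the same
	response, here through dockerd's otelhttp instrumentation wrapper. A minimal
	sketch of the bug class (illustrative handler, not dockerd's actual code):
	
	    package main
	
	    import "net/http"
	
	    func handler(w http.ResponseWriter, r *http.Request) {
	        w.WriteHeader(http.StatusOK) // first call commits the status line
	        // A later code path writes again; net/http logs
	        // "http: superfluous response.WriteHeader call from ...".
	        w.WriteHeader(http.StatusInternalServerError)
	    }
	
	    func main() {
	        http.HandleFunc("/", handler)
	        http.ListenAndServe(":8080", nil)
	    }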
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d097abf5af66       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   683f67e5fac4a       busybox-fc5497c4f-kxn7k
	5a273ec673a42       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   75579f7022d4c       storage-provisioner
	7e21b812f1ccd       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   fca08318dd69f       coredns-7db6d8ff4d-vvf4j
	376e44d9bafd3       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   ccb6f28cb9dd6       coredns-7db6d8ff4d-zxvcx
	22e486515eda5       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   701497cc8b03d       kindnet-qgbzf
	c6c05f014af2c       a0bf559e280cf                                                                                         26 minutes ago      Running             kube-proxy                0                   dd04e5743865e       kube-proxy-hvzz9
	d059ac8fe4753       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Running             kube-vip                  0                   8db90fd8b8711       kube-vip-ha-437800
	2ff176e30ec62       259c8277fcbbc                                                                                         27 minutes ago      Running             kube-scheduler            0                   052d202dd54e8       kube-scheduler-ha-437800
	ad03ce97e2dbf       c42f13656d0b2                                                                                         27 minutes ago      Running             kube-apiserver            0                   d79e4ee79205f       kube-apiserver-ha-437800
	752b474aaa312       c7aad43836fa5                                                                                         27 minutes ago      Running             kube-controller-manager   0                   6a224fb51b215       kube-controller-manager-ha-437800
	0084f71d1910b       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   8f19761775907       etcd-ha-437800
	
	
	==> coredns [376e44d9bafd] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52120 - 62103 "HINFO IN 8895575928499902026.9047732300977096024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.147201501s
	[INFO] 10.244.1.2:60060 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.198192674s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.06663823s
	[INFO] 10.244.0.4:43561 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000108s
	[INFO] 10.244.2.2:51887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224502s
	[INFO] 10.244.2.2:36346 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000105901s
	[INFO] 10.244.1.2:59078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033344663s
	[INFO] 10.244.1.2:53712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000228402s
	[INFO] 10.244.1.2:52382 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102401s
	[INFO] 10.244.0.4:54042 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014075011s
	[INFO] 10.244.0.4:33766 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088201s
	[INFO] 10.244.0.4:46993 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155201s
	[INFO] 10.244.2.2:38110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126601s
	[INFO] 10.244.2.2:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000642s
	[INFO] 10.244.2.2:43378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156401s
	[INFO] 10.244.1.2:56619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107701s
	[INFO] 10.244.1.2:42654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114501s
	[INFO] 10.244.0.4:50355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197901s
	[INFO] 10.244.0.4:56046 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000912s
	[INFO] 10.244.0.4:58870 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147302s
	[INFO] 10.244.2.2:48053 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154202s
	[INFO] 10.244.2.2:59663 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000332402s
	[INFO] 10.244.2.2:43598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281102s
	[INFO] 10.244.2.2:38833 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188801s
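	
	Each CoreDNS line above follows the log plugin's format: client address and
	port, query ID, then "type class name proto size do bufsize" in quotes,
	followed by the response code, header flags, response size, and duration.
	A minimal sketch, assuming in-cluster network access, that reproduces one of
	these lookups against the cluster DNS IP seen in the resolv.conf rewrite in
	the Docker section:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                // 10.96.0.10 is the cluster DNS service IP from the Docker section above.
	                return d.DialContext(ctx, "udp", "10.96.0.10:53")
	            },
	        }
	        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	        fmt.Println(addrs, err)
	    }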
	
	
	==> coredns [7e21b812f1cc] <==
	[INFO] 10.244.0.4:42206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194002s
	[INFO] 10.244.0.4:54465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206602s
	[INFO] 10.244.0.4:59891 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107101s
	[INFO] 10.244.0.4:34920 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103601s
	[INFO] 10.244.0.4:42536 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137102s
	[INFO] 10.244.2.2:39927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091101s
	[INFO] 10.244.2.2:52442 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011267589s
	[INFO] 10.244.2.2:53077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186702s
	[INFO] 10.244.2.2:58533 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112701s
	[INFO] 10.244.2.2:58677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000226402s
	[INFO] 10.244.1.2:42446 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102201s
	[INFO] 10.244.1.2:50823 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063501s
	[INFO] 10.244.0.4:48975 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000355803s
	[INFO] 10.244.2.2:47577 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112901s
	[INFO] 10.244.2.2:45113 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000230802s
	[INFO] 10.244.1.2:50322 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124301s
	[INFO] 10.244.1.2:55709 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116601s
	[INFO] 10.244.1.2:49760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159002s
	[INFO] 10.244.1.2:46786 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097401s
	[INFO] 10.244.0.4:33276 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166501s
	[INFO] 10.244.0.4:37027 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000285402s
	[INFO] 10.244.0.4:46102 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000330103s
	[INFO] 10.244.0.4:39295 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000736s
	[INFO] 10.244.2.2:46024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.2.2:36536 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000141201s
	
	
	==> describe nodes <==
	Name:               ha-437800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T11_41_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:08:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 11:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 11:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.176.3
	  Hostname:    ha-437800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 83ce417c0fca49beb91fd5a5e984cb94
	  System UUID:                ec8c47e6-30d4-a345-98f2-580804f5da63
	  Boot ID:                    1b00c75c-57fc-4c53-9736-a168a0852459
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kxn7k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-vvf4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-zxvcx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-437800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-qgbzf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-437800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-437800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-hvzz9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-437800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-437800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-437800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-437800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-437800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m   node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-437800 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-437800 event: Registered Node ha-437800 in Controller
	
	
	Name:               ha-437800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T11_45_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:45:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:07:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 12:08:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 12:08:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 12:08:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 12:06:03 +0000   Mon, 29 Apr 2024 12:08:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.26.185.80
	  Hostname:    ha-437800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c50764115df64038acb4443b3cae77d2
	  System UUID:                f0ff1baa-9620-b949-8541-c672e1b2a37d
	  Boot ID:                    22ec1ffd-e71a-47e6-b7d4-9f4db7535179
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dsnxf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-437800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-qg7qh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-437800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-437800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-pzfjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-437800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-437800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-437800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-437800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-437800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node ha-437800-m02 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-437800-m02 event: Registered Node ha-437800-m02 in Controller
	  Normal  NodeNotReady             45s                node-controller  Node ha-437800-m02 status is now: NodeNotReady
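	
	The Unknown conditions, the two node.kubernetes.io/unreachable taints, and the
	NodeNotReady event above all point the same way: the kubelet on ha-437800-m02
	stopped posting status at 12:08:11, about 45 seconds before this log capture.
	A minimal sketch, with a placeholder kubeconfig path, of how a post-mortem
	helper could surface nodes in this state:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                // Flag any node whose Ready condition is False or Unknown.
	                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
	                    fmt.Printf("%s: Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    }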
	
	
	Name:               ha-437800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T11_49_36_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:49:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:08:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:06:19 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:06:19 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:06:19 +0000   Mon, 29 Apr 2024 11:49:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:06:19 +0000   Mon, 29 Apr 2024 11:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.177.113
	  Hostname:    ha-437800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b7bc4253463458e8279559d8bce36c3
	  System UUID:                78128ab4-98e9-ca40-b816-190967054531
	  Boot ID:                    fa1b1b92-c139-49e3-addb-77f8b4a64c8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ndzvx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-437800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-7cn9p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-437800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-437800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2tjfd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-437800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-437800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-437800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-437800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-437800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-437800-m03 event: Registered Node ha-437800-m03 in Controller
	
	
	Name:               ha-437800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-437800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=ha-437800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T11_54_54_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:54:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-437800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:08:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:05:35 +0000   Mon, 29 Apr 2024 11:54:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:05:35 +0000   Mon, 29 Apr 2024 11:54:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:05:35 +0000   Mon, 29 Apr 2024 11:54:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:05:35 +0000   Mon, 29 Apr 2024 11:55:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.187.66
	  Hostname:    ha-437800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 853d3adf697e46d397496787c4126ab1
	  System UUID:                1b84b004-2569-504f-b0d3-635a04b355d1
	  Boot ID:                    428abdc7-1488-4c50-812d-21b36aa75efe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hcnh8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-72pxs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-437800-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-437800-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-437800-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-437800-m04 event: Registered Node ha-437800-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-437800-m04 event: Registered Node ha-437800-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-437800-m04 event: Registered Node ha-437800-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-437800-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.085462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 11:40] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.185652] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Apr29 11:41] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.110755] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.599914] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.238212] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.236895] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +2.836426] systemd-fstab-generator[1175]: Ignoring "noauto" option for root device
	[  +0.247498] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.219802] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.311576] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +11.766611] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.129929] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.873927] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +7.075979] systemd-fstab-generator[1715]: Ignoring "noauto" option for root device
	[  +0.112089] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.917122] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.428342] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[ +15.695772] kauditd_printk_skb: 17 callbacks suppressed
	[Apr29 11:42] kauditd_printk_skb: 29 callbacks suppressed
	[Apr29 11:45] kauditd_printk_skb: 24 callbacks suppressed
	[Apr29 11:55] hrtimer: interrupt took 3712821 ns
	
	
	==> etcd [0084f71d1910] <==
	{"level":"warn","ts":"2024-04-29T12:08:56.873856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.878064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.884932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.89158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.906582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.915982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.925989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.934943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.940881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.945588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.956931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.965473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.973541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.977888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.979146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.984136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:56.995676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.004745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.019688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.040564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.046645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.058167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.067887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.07893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T12:08:57.079172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"717e02486ecd6145","from":"717e02486ecd6145","remote-peer-id":"28525c14e996a8fe","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:08:57 up 29 min,  0 users,  load average: 0.75, 0.45, 0.39
	Linux ha-437800 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [22e486515eda] <==
	I0429 12:08:19.537135       1 main.go:250] Node ha-437800-m04 has CIDR [10.244.3.0/24] 
	I0429 12:08:29.555234       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 12:08:29.555413       1 main.go:227] handling current node
	I0429 12:08:29.555430       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 12:08:29.555440       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 12:08:29.555661       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 12:08:29.555759       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 12:08:29.556122       1 main.go:223] Handling node with IPs: map[172.26.187.66:{}]
	I0429 12:08:29.556395       1 main.go:250] Node ha-437800-m04 has CIDR [10.244.3.0/24] 
	I0429 12:08:39.572169       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 12:08:39.572232       1 main.go:227] handling current node
	I0429 12:08:39.572246       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 12:08:39.572254       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 12:08:39.572423       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 12:08:39.572598       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 12:08:39.572709       1 main.go:223] Handling node with IPs: map[172.26.187.66:{}]
	I0429 12:08:39.572720       1 main.go:250] Node ha-437800-m04 has CIDR [10.244.3.0/24] 
	I0429 12:08:49.582209       1 main.go:223] Handling node with IPs: map[172.26.176.3:{}]
	I0429 12:08:49.582333       1 main.go:227] handling current node
	I0429 12:08:49.582381       1 main.go:223] Handling node with IPs: map[172.26.185.80:{}]
	I0429 12:08:49.582390       1 main.go:250] Node ha-437800-m02 has CIDR [10.244.1.0/24] 
	I0429 12:08:49.582821       1 main.go:223] Handling node with IPs: map[172.26.177.113:{}]
	I0429 12:08:49.582915       1 main.go:250] Node ha-437800-m03 has CIDR [10.244.2.0/24] 
	I0429 12:08:49.583221       1 main.go:223] Handling node with IPs: map[172.26.187.66:{}]
	I0429 12:08:49.583312       1 main.go:250] Node ha-437800-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ad03ce97e2db] <==
	Trace[270084416]: ["GuaranteedUpdate etcd3" audit-id:8b5837d8-9673-4542-ac59-7ebc0e16d402,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 552ms (12:08:00.751)
	Trace[270084416]:  ---"Txn call completed" 550ms (12:08:01.303)]
	Trace[270084416]: [552.480014ms] [552.480014ms] END
	I0429 12:08:01.791720       1 trace.go:236] Trace[2042937976]: "Get" accept:application/json, */*,audit-id:1aa89527-0c31-40c4-9794-17e694403911,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 12:08:00.950) (total time: 841ms):
	Trace[2042937976]: ---"About to write a response" 840ms (12:08:01.790)
	Trace[2042937976]: [841.593712ms] [841.593712ms] END
	I0429 12:08:02.553170       1 trace.go:236] Trace[1059014809]: "Update" accept:application/json, */*,audit-id:b7a7eaa0-7f57-48be-b3b7-c36cf765a6c0,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 12:08:01.794) (total time: 758ms):
	Trace[1059014809]: ["GuaranteedUpdate etcd3" audit-id:b7a7eaa0-7f57-48be-b3b7-c36cf765a6c0,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 758ms (12:08:01.794)
	Trace[1059014809]:  ---"Txn call completed" 757ms (12:08:02.552)]
	Trace[1059014809]: [758.604853ms] [758.604853ms] END
	I0429 12:08:03.154096       1 trace.go:236] Trace[1205331761]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.26.176.3,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 12:08:01.767) (total time: 1387ms):
	Trace[1205331761]: ---"Transaction prepared" 757ms (12:08:02.554)
	Trace[1205331761]: ---"Txn call completed" 599ms (12:08:03.154)
	Trace[1205331761]: [1.387003503s] [1.387003503s] END
	I0429 12:08:04.140770       1 trace.go:236] Trace[1594776158]: "Get" accept:application/json, */*,audit-id:4ffdd650-63b4-4096-b7f3-f84e988d910f,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (29-Apr-2024 12:08:03.564) (total time: 576ms):
	Trace[1594776158]: ---"About to write a response" 575ms (12:08:04.140)
	Trace[1594776158]: [576.045985ms] [576.045985ms] END
	I0429 12:08:04.699681       1 trace.go:236] Trace[225210997]: "Update" accept:application/json, */*,audit-id:4921ae2a-9d18-496a-bc59-04527837d5b6,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 12:08:04.143) (total time: 555ms):
	Trace[225210997]: ["GuaranteedUpdate etcd3" audit-id:4921ae2a-9d18-496a-bc59-04527837d5b6,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 555ms (12:08:04.144)
	Trace[225210997]:  ---"Txn call completed" 551ms (12:08:04.696)]
	Trace[225210997]: [555.787922ms] [555.787922ms] END
	I0429 12:08:06.852044       1 trace.go:236] Trace[1605471106]: "Update" accept:application/json, */*,audit-id:ea15da7c-1f1b-444c-9e9b-edb5e1f6b787,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 12:08:05.906) (total time: 945ms):
	Trace[1605471106]: ["GuaranteedUpdate etcd3" audit-id:ea15da7c-1f1b-444c-9e9b-edb5e1f6b787,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 944ms (12:08:05.907)
	Trace[1605471106]:  ---"Txn call completed" 943ms (12:08:06.851)]
	Trace[1605471106]: [945.118327ms] [945.118327ms] END
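	
	The Trace[...] blocks above come from k8s.io/utils/trace: the API server records
	named steps per request and logs the whole trace when total latency crosses a
	threshold; here every slow step is a "Txn call completed" against etcd, in line
	with the overloaded-network warnings in the etcd section. A minimal sketch of
	the same utility:
	
	    package main
	
	    import (
	        "time"
	
	        "k8s.io/utils/trace"
	    )
	
	    func main() {
	        t := trace.New("Update", trace.Field{Key: "resource", Value: "leases"})
	        defer t.LogIfLong(500 * time.Millisecond) // log only if total time exceeds 500ms
	        time.Sleep(700 * time.Millisecond)        // stand-in for a slow etcd Txn call
	        t.Step("Txn call completed")
	    }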
	
	
	==> kube-controller-manager [752b474aaa31] <==
	I0429 11:49:31.697835       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-437800-m03"
	I0429 11:50:34.073588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.745442ms"
	I0429 11:50:34.129710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.938212ms"
	I0429 11:50:34.226197       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.408993ms"
	I0429 11:50:34.420194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="193.864688ms"
	I0429 11:50:34.775451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="355.212012ms"
	I0429 11:50:34.830992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.325711ms"
	I0429 11:50:34.831715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="452.601µs"
	I0429 11:50:34.832057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.701µs"
	I0429 11:50:34.948760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.009964ms"
	I0429 11:50:34.952585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.5µs"
	I0429 11:50:37.186226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.972875ms"
	I0429 11:50:37.188871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.1µs"
	I0429 11:50:37.423949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.911934ms"
	I0429 11:50:37.424625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="507.404µs"
	I0429 11:50:37.747262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.078386ms"
	I0429 11:50:37.747806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="328.502µs"
	E0429 11:54:53.063759       1 certificate_controller.go:146] Sync csr-lqvmd failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lqvmd": the object has been modified; please apply your changes to the latest version and try again
	I0429 11:54:53.137759       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-437800-m04\" does not exist"
	I0429 11:54:53.181100       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-437800-m04" podCIDRs=["10.244.3.0/24"]
	I0429 11:54:56.791542       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-437800-m04"
	I0429 11:55:15.784020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-437800-m04"
	I0429 12:08:11.526115       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-437800-m04"
	I0429 12:08:11.632154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.261757ms"
	I0429 12:08:11.633093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="485.502µs"
	
	
	==> kube-proxy [c6c05f014af2] <==
	I0429 11:41:59.396774       1 server_linux.go:69] "Using iptables proxy"
	I0429 11:41:59.434801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.176.3"]
	I0429 11:41:59.493135       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 11:41:59.493254       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 11:41:59.493279       1 server_linux.go:165] "Using iptables Proxier"
	I0429 11:41:59.500453       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 11:41:59.501578       1 server.go:872] "Version info" version="v1.30.0"
	I0429 11:41:59.501731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 11:41:59.505744       1 config.go:192] "Starting service config controller"
	I0429 11:41:59.506465       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 11:41:59.506814       1 config.go:101] "Starting endpoint slice config controller"
	I0429 11:41:59.506976       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 11:41:59.511510       1 config.go:319] "Starting node config controller"
	I0429 11:41:59.511761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 11:41:59.607430       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 11:41:59.607438       1 shared_informer.go:320] Caches are synced for service config
	I0429 11:41:59.612839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ff176e30ec6] <==
	E0429 11:41:40.758437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 11:41:40.804050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 11:41:40.804681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 11:41:43.293317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 11:50:34.076914       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dsnxf\": pod busybox-fc5497c4f-dsnxf is already assigned to node \"ha-437800-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dsnxf" node="ha-437800-m02"
	E0429 11:50:34.078546       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dsnxf\": pod busybox-fc5497c4f-dsnxf is already assigned to node \"ha-437800-m02\"" pod="default/busybox-fc5497c4f-dsnxf"
	E0429 11:50:34.079618       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kxn7k\": pod busybox-fc5497c4f-kxn7k is already assigned to node \"ha-437800\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kxn7k" node="ha-437800"
	E0429 11:50:34.079836       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7deeaa5b-a8bf-4ba8-b7d4-48507f9a1df0(default/busybox-fc5497c4f-kxn7k) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kxn7k"
	E0429 11:50:34.079871       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kxn7k\": pod busybox-fc5497c4f-kxn7k is already assigned to node \"ha-437800\"" pod="default/busybox-fc5497c4f-kxn7k"
	I0429 11:50:34.079901       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kxn7k" node="ha-437800"
	E0429 11:54:53.264313       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hcnh8\": pod kindnet-hcnh8 is already assigned to node \"ha-437800-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hcnh8" node="ha-437800-m04"
	E0429 11:54:53.265005       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hcnh8\": pod kindnet-hcnh8 is already assigned to node \"ha-437800-m04\"" pod="kube-system/kindnet-hcnh8"
	I0429 11:54:53.265284       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hcnh8" node="ha-437800-m04"
	E0429 11:54:53.268714       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-72pxs\": pod kube-proxy-72pxs is already assigned to node \"ha-437800-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-72pxs" node="ha-437800-m04"
	E0429 11:54:53.268875       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fd53b9aa-91d2-4e14-a8c2-eb859b577b2b(kube-system/kube-proxy-72pxs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-72pxs"
	E0429 11:54:53.269123       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-72pxs\": pod kube-proxy-72pxs is already assigned to node \"ha-437800-m04\"" pod="kube-system/kube-proxy-72pxs"
	I0429 11:54:53.269443       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-72pxs" node="ha-437800-m04"
	E0429 11:54:53.602150       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cfkmb\": pod kube-proxy-cfkmb is already assigned to node \"ha-437800-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cfkmb" node="ha-437800-m04"
	E0429 11:54:53.602228       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3e2d9f6a-781b-4af0-9ebd-5dabf0c5ce51(kube-system/kube-proxy-cfkmb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cfkmb"
	E0429 11:54:53.602249       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cfkmb\": pod kube-proxy-cfkmb is already assigned to node \"ha-437800-m04\"" pod="kube-system/kube-proxy-cfkmb"
	I0429 11:54:53.602271       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cfkmb" node="ha-437800-m04"
	E0429 11:54:53.602654       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2bh7g\": pod kindnet-2bh7g is already assigned to node \"ha-437800-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2bh7g" node="ha-437800-m04"
	E0429 11:54:53.602855       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ee3a2f56-b7b3-4ec9-952f-4d45f70db417(kube-system/kindnet-2bh7g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2bh7g"
	E0429 11:54:53.603134       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2bh7g\": pod kindnet-2bh7g is already assigned to node \"ha-437800-m04\"" pod="kube-system/kindnet-2bh7g"
	I0429 11:54:53.603304       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2bh7g" node="ha-437800-m04"
	
	
	==> kubelet <==
	Apr 29 12:04:43 ha-437800 kubelet[2218]: E0429 12:04:43.406521    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:04:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:04:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:04:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:04:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:05:43 ha-437800 kubelet[2218]: E0429 12:05:43.406906    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:05:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:05:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:05:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:05:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:06:43 ha-437800 kubelet[2218]: E0429 12:06:43.405998    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:06:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:06:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:06:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:06:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:07:43 ha-437800 kubelet[2218]: E0429 12:07:43.405576    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:07:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:07:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:07:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:07:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:08:43 ha-437800 kubelet[2218]: E0429 12:08:43.409397    2218 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:08:43 ha-437800 kubelet[2218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:08:43 ha-437800 kubelet[2218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:08:43 ha-437800 kubelet[2218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:08:43 ha-437800 kubelet[2218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0429 12:08:48.830991    9856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-437800 -n ha-437800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-437800 -n ha-437800: (12.3079767s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-437800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (39.44s)
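
Note on the recurring stderr warning: every failure in this section trips over the same missing file, C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json. That directory name is deterministic: the Docker CLI keys per-context metadata by the SHA-256 of the context name, and 37a8eec1ce19687d... is the digest of the string "default". A minimal Go sketch (not part of the test suite) that reproduces the path component:

// Why the "Unable to resolve the current Docker CLI context" warning points
// at the 37a8eec1... directory: the Docker CLI stores context metadata under
// ~/.docker/contexts/meta/<sha256(context name)>/meta.json.
package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))
	fmt.Println(digest) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
	fmt.Println(filepath.Join(".docker", "contexts", "meta", digest, "meta.json"))
}

On these workers the context store was simply never populated, so the warning is cosmetic; it only turns into a failure where a test asserts that stderr is empty.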

TestMountStart/serial/RestartStopped (187.67s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-694400
E0429 12:37:24.775560    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-694400: exit status 90 (2m55.8452473s)

-- stdout --
	* [mount-start-2-694400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-694400
	* Restarting existing hyperv VM for "mount-start-2-694400" ...
	
	

-- /stdout --
** stderr ** 
	W0429 12:37:12.983305    2932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 12:38:39 mount-start-2-694400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 12:38:39 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:39.872966456Z" level=info msg="Starting up"
	Apr 29 12:38:39 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:39.874245700Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 12:38:39 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:39.878075633Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.909944142Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939171567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939283862Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939441755Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939462955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939870237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.939961633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.940130825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.940225421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.940246420Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.940257620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.940716000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.941303674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.944389240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.944508634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.945002613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.945478592Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.946106065Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.946223560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.946242959Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.948334367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.948626155Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.948798147Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.948901643Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.948929342Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.950147288Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.950622768Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.950867357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.950950553Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.951018150Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.951098447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.951453831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.951783317Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952005807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952166800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952190099Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952282695Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952309394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952409090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952433989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952449888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952465687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952704277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952824272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952846571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952864070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952879869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952898468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952912668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952927167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952941466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952959266Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952983765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.952998764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953012263Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953069561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953090260Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953102759Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953115959Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953220454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953239553Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953252553Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953588038Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953647236Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953757031Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 12:38:39 mount-start-2-694400 dockerd[668]: time="2024-04-29T12:38:39.953787430Z" level=info msg="containerd successfully booted in 0.046358s"
	Apr 29 12:38:40 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:40.939105470Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 12:38:40 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:40.969761216Z" level=info msg="Loading containers: start."
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.212280075Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.299543207Z" level=info msg="Loading containers: done."
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.324610474Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.325241681Z" level=info msg="Daemon has completed initialization"
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.387972850Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 12:38:41 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:38:41.389139063Z" level=info msg="API listen on [::]:2376"
	Apr 29 12:38:41 mount-start-2-694400 systemd[1]: Started Docker Application Container Engine.
	Apr 29 12:39:07 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:39:07.491005636Z" level=info msg="Processing signal 'terminated'"
	Apr 29 12:39:07 mount-start-2-694400 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 12:39:07 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:39:07.493490057Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 12:39:07 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:39:07.493645865Z" level=info msg="Daemon shutdown complete"
	Apr 29 12:39:07 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:39:07.493765371Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 12:39:07 mount-start-2-694400 dockerd[662]: time="2024-04-29T12:39:07.493799773Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 12:39:08 mount-start-2-694400 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 12:39:08 mount-start-2-694400 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 12:39:08 mount-start-2-694400 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 12:39:08 mount-start-2-694400 dockerd[1040]: time="2024-04-29T12:39:08.575915112Z" level=info msg="Starting up"
	Apr 29 12:40:08 mount-start-2-694400 dockerd[1040]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 12:40:08 mount-start-2-694400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 12:40:08 mount-start-2-694400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 12:40:08 mount-start-2-694400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-694400" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-694400 -n mount-start-2-694400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-694400 -n mount-start-2-694400: exit status 6 (11.8178401s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0429 12:40:08.830929    9628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 12:40:20.463669    9628 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-694400" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-694400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (187.67s)
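
The proximate cause in the journal above is the second dockerd start (pid 1040): it waits for containerd's socket and gives up with "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded", so systemd marks docker.service failed and minikube exits with RUNTIME_ENABLE. A standalone Go sketch (assuming nothing is serving that socket path) showing the same error class, a unix-socket dial bounded by a context deadline:

// Dial containerd's socket under a deadline. If the socket file is missing the
// dial fails immediately; if it exists but nothing accepts connections, the
// dial can block until the context deadline fires, which is the error class
// dockerd logged above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket reachable")
}

One plausible reading of the journal: the first dockerd (pid 662) booted its managed containerd successfully, but after the graceful shutdown the restarted daemon never saw a live containerd within its dial window. The journal alone does not show why containerd failed to come back.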

TestMultiNode/serial/PingHostFrom2Pods (57.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0429 12:48:47.993780    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- sh -c "ping -c 1 172.26.176.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- sh -c "ping -c 1 172.26.176.1": exit status 1 (10.5205823s)

-- stdout --
	PING 172.26.176.1 (172.26.176.1): 56 data bytes
	
	--- 172.26.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 12:48:48.576071    3872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.26.176.1) from pod (busybox-fc5497c4f-gr44t): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- sh -c "ping -c 1 172.26.176.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- sh -c "ping -c 1 172.26.176.1": exit status 1 (10.5326606s)

-- stdout --
	PING 172.26.176.1 (172.26.176.1): 56 data bytes
	
	--- 172.26.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0429 12:48:59.671780    1868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.26.176.1) from pod (busybox-fc5497c4f-xvm2v): exit status 1
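
Both pods resolve DNS in the preceding steps; only the ICMP probe to the Hyper-V host-side gateway (172.26.176.1) fails, and it fails with 100% packet loss rather than a routing error. On Windows hosts this pattern commonly indicates the host firewall dropping inbound ICMPv4 echo from the guest subnet, though this report alone cannot confirm that. A sketch (not the actual multinode_test.go helper; pod name and IP taken from this run) of the probe the test performs:

// Exec a one-shot ping from a pod to the host gateway via kubectl; a non-zero
// exit (e.g. 100% packet loss) surfaces as an error from CombinedOutput.
package main

import (
	"fmt"
	"os/exec"
)

func pingHostFromPod(pod, hostIP string) error {
	cmd := exec.Command("kubectl", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		return fmt.Errorf("ping from %s to %s: %w", pod, hostIP, err)
	}
	return nil
}

func main() {
	if err := pingHostFromPod("busybox-fc5497c4f-gr44t", "172.26.176.1"); err != nil {
		fmt.Println(err)
	}
}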
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200: (12.2527893s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25: (8.6675371s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-694400                           | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:33 UTC | 29 Apr 24 12:35 UTC |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:35 UTC |                     |
	|         | --profile mount-start-2-694400 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-694400 ssh -- ls                    | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:36 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-694400                           | mount-start-1-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:36 UTC | 29 Apr 24 12:36 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-694400 ssh -- ls                    | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:36 UTC | 29 Apr 24 12:36 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-694400                           | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:36 UTC | 29 Apr 24 12:37 UTC |
	| start   | -p mount-start-2-694400                           | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:37 UTC |                     |
	| delete  | -p mount-start-2-694400                           | mount-start-2-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:40 UTC | 29 Apr 24 12:41 UTC |
	| delete  | -p mount-start-1-694400                           | mount-start-1-694400 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:41 UTC | 29 Apr 24 12:41 UTC |
	| start   | -p multinode-409200                               | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:41 UTC | 29 Apr 24 12:48 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- apply -f                   | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- rollout                    | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- get pods -o                | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- get pods -o                | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-gr44t --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-xvm2v --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-gr44t --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-xvm2v --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-gr44t -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-xvm2v -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- get pods -o                | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-gr44t                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC |                     |
	|         | busybox-fc5497c4f-gr44t -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.176.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC | 29 Apr 24 12:48 UTC |
	|         | busybox-fc5497c4f-xvm2v                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-409200 -- exec                       | multinode-409200     | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:48 UTC |                     |
	|         | busybox-fc5497c4f-xvm2v -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.176.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:41:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:41:24.071859    3296 out.go:291] Setting OutFile to fd 1376 ...
	I0429 12:41:24.072685    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:41:24.072685    3296 out.go:304] Setting ErrFile to fd 1392...
	I0429 12:41:24.072685    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:41:24.098316    3296 out.go:298] Setting JSON to false
	I0429 12:41:24.101035    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":35956,"bootTime":1714358527,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 12:41:24.102029    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:41:24.108002    3296 out.go:177] * [multinode-409200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 12:41:24.112063    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:41:24.112063    3296 notify.go:220] Checking for updates...
	I0429 12:41:24.115983    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:41:24.117816    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 12:41:24.120931    3296 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 12:41:24.123137    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:41:24.126348    3296 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:41:24.126348    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:41:29.691313    3296 out.go:177] * Using the hyperv driver based on user configuration
	I0429 12:41:29.694806    3296 start.go:297] selected driver: hyperv
	I0429 12:41:29.694898    3296 start.go:901] validating driver "hyperv" against <nil>
	I0429 12:41:29.694898    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:41:29.750099    3296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:41:29.750905    3296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:41:29.751474    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:41:29.751474    3296 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:41:29.751474    3296 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:41:29.751786    3296 start.go:340] cluster config:
	{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:41:29.751786    3296 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:41:29.758943    3296 out.go:177] * Starting "multinode-409200" primary control-plane node in "multinode-409200" cluster
	I0429 12:41:29.762713    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:41:29.762713    3296 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:41:29.762713    3296 cache.go:56] Caching tarball of preloaded images
	I0429 12:41:29.762713    3296 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:41:29.763382    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:41:29.763583    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:41:29.763583    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json: {Name:mkf8183664b98a8e3f56b1e9ae3d2d10f3e06326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:41:29.764783    3296 start.go:360] acquireMachinesLock for multinode-409200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:41:29.765361    3296 start.go:364] duration metric: took 537.1µs to acquireMachinesLock for "multinode-409200"
	I0429 12:41:29.765392    3296 start.go:93] Provisioning new machine with config: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:41:29.765392    3296 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 12:41:29.769708    3296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:41:29.769909    3296 start.go:159] libmachine.API.Create for "multinode-409200" (driver="hyperv")
	I0429 12:41:29.769909    3296 client.go:168] LocalClient.Create starting
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 12:41:29.771491    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:41:29.771491    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:41:29.771491    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 12:41:31.935654    3296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 12:41:31.936054    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:31.936157    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 12:41:33.722970    3296 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 12:41:33.722970    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:33.723246    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:41:35.256783    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:41:35.256783    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:35.257935    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:41:38.873395    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:41:38.873395    3296 main.go:141] libmachine: [stderr =====>] : 
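
The driver enumerates virtual switches as JSON and keeps either an External switch or the well-known Default Switch GUID, per the Where-Object filter in the logged command (Hyper-V's SwitchType enum is 0 Private, 1 Internal, 2 External, so the Default Switch above is Internal). A minimal Go sketch of consuming that output; the selection rule is inferred from the filter, not lifted from minikube's source.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch mirrors the fields selected by the PowerShell query above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
		var switches []vmSwitch
		if err := json.Unmarshal(raw, &switches); err != nil {
			panic(err)
		}
		// Prefer an External switch; otherwise fall back to the first
		// candidate (here, the built-in Default Switch).
		chosen := switches[0]
		for _, s := range switches {
			if s.SwitchType == 2 {
				chosen = s
				break
			}
		}
		fmt.Printf("Using switch %q (%s)\n", chosen.Name, chosen.Id)
	}
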
	I0429 12:41:38.876182    3296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:41:39.387931    3296 main.go:141] libmachine: Creating SSH key...
	I0429 12:41:39.546045    3296 main.go:141] libmachine: Creating VM...
	I0429 12:41:39.546173    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:41:42.449474    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:41:42.449474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:42.449474    3296 main.go:141] libmachine: Using switch "Default Switch"
	I0429 12:41:42.452970    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:41:44.272448    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:41:44.273105    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:44.273105    3296 main.go:141] libmachine: Creating VHD
	I0429 12:41:44.273238    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 12:41:47.975205    3296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F97E0AA5-FA51-469C-8B71-A632009B8D6A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 12:41:47.976124    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:47.976156    3296 main.go:141] libmachine: Writing magic tar header
	I0429 12:41:47.976156    3296 main.go:141] libmachine: Writing SSH key tar header
	I0429 12:41:47.986236    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 12:41:51.133276    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:51.133501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:51.133501    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd' -SizeBytes 20000MB
	I0429 12:41:53.639402    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:53.639402    3296 main.go:141] libmachine: [stderr =====>] : 
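
The sequence above is the boot2docker key-injection trick: create a tiny fixed VHD (raw data plus footer), write a tar stream carrying the generated SSH key at the start of the raw region, then convert the disk to dynamic and grow it; on first boot the guest appears to detect the tar magic and extract the key. A minimal Go sketch of the tar-writing step, under those assumptions (the path and key content are placeholders):

	package main

	import (
		"archive/tar"
		"os"
	)

	// writeKeyTar overwrites the start of a fixed-size VHD with a tar
	// stream holding an SSH public key, leaving the VHD footer intact.
	func writeKeyTar(vhdPath string, pubKey []byte) error {
		f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f) // writes from offset 0
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return err
		}
		return tw.Close()
	}

	func main() {
		if err := writeKeyTar(`C:\tmp\fixed.vhd`, []byte("ssh-rsa AAAA... demo")); err != nil {
			println("write failed:", err.Error())
		}
	}
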
	I0429 12:41:53.640023    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 12:41:57.400298    3296 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-409200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 12:41:57.400573    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:57.400573    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-409200 -DynamicMemoryEnabled $false
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-409200 -Count 2
	I0429 12:42:01.887698    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:01.887698    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:01.887837    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\boot2docker.iso'
	I0429 12:42:04.494718    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:04.495429    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:04.495717    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd'
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:07.155562    3296 main.go:141] libmachine: Starting VM...
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200
	I0429 12:42:10.190311    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:10.190311    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:10.191327    3296 main.go:141] libmachine: Waiting for host to start...
	I0429 12:42:10.191327    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:12.498310    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:12.499114    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:12.499174    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:15.123193    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:15.123539    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:16.125896    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:18.339192    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:18.339192    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:18.339501    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:20.940949    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:20.940949    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:21.943094    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:24.162676    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:24.162676    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:24.162828    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:26.695989    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:26.696067    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:27.696767    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:29.911560    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:29.912251    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:29.912399    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:32.458187    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:32.458475    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:33.461544    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:35.662693    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:35.662936    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:35.663029    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:38.281947    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:38.282170    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:38.282170    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:40.427474    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:40.427474    3296 main.go:141] libmachine: [stderr =====>] : 
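
The alternating Get-VM state and ipaddresses queries above form a poll loop: the host keeps asking until the first network adapter reports an address (about one second between attempts, judging by the timestamps). A Go sketch of the same loop, assuming powershell.exe is on PATH; the timeout and helper name are illustrative.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIP polls Hyper-V the way the loop above does, sleeping
	// between empty answers until the first adapter reports an address.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
		for time.Now().Before(deadline) {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
			if err != nil {
				return "", err
			}
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		ip, err := waitForIP("multinode-409200", 5*time.Minute)
		fmt.Println(ip, err)
	}
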
	I0429 12:42:40.427552    3296 machine.go:94] provisionDockerMachine start ...
	I0429 12:42:40.427698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:42.651318    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:42.651847    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:42.651847    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:45.312059    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:45.312723    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:45.319337    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:45.332543    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:45.332543    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:42:45.454107    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 12:42:45.454107    3296 buildroot.go:166] provisioning hostname "multinode-409200"
	I0429 12:42:45.454107    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:50.273859    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:50.273859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:50.282260    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:50.282787    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:50.283030    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200 && echo "multinode-409200" | sudo tee /etc/hostname
	I0429 12:42:50.450202    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200
	
	I0429 12:42:50.450202    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:52.617897    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:52.617980    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:52.617980    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:55.249656    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:55.250533    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:55.254727    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:55.255866    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:55.255866    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200' | sudo tee -a /etc/hosts; 
				fi
			fi
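
The shell snippet above keeps /etc/hosts idempotent: do nothing if some line already ends with the hostname, otherwise rewrite the 127.0.1.1 entry or append one. The same rule re-expressed as a pure Go function (a hypothetical helper, shown only to make the branching explicit):

	package main

	import (
		"fmt"
		"regexp"
	)

	// ensureHostsEntry applies the same rule as the shell snippet above.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // some line already maps the hostname
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "multinode-409200"))
	}
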
	I0429 12:42:55.394645    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:42:55.394645    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 12:42:55.394645    3296 buildroot.go:174] setting up certificates
	I0429 12:42:55.394645    3296 provision.go:84] configureAuth start
	I0429 12:42:55.394645    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:57.543276    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:57.543276    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:57.543379    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:00.118356    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:00.119200    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:00.119200    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:02.260662    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:02.261622    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:02.261691    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:04.839372    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:04.839909    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:04.839909    3296 provision.go:143] copyHostCerts
	I0429 12:43:04.839909    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 12:43:04.839909    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 12:43:04.839909    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 12:43:04.840902    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 12:43:04.841954    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 12:43:04.842681    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 12:43:04.842681    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 12:43:04.842890    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 12:43:04.844022    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 12:43:04.844108    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 12:43:04.844108    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 12:43:04.844646    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 12:43:04.845317    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200 san=[127.0.0.1 172.26.185.116 localhost minikube multinode-409200]
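
provision.go is generating a server certificate signed by the minikube CA and carrying the VM's IP and hostnames as SANs. The sketch below shows the x509 fields involved but self-signs for brevity, so it is not minikube's implementation; the SAN list and the 26280h lifetime are taken from the log line and config dump above.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-409200"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.185.116")},
			DNSNames:     []string{"localhost", "minikube", "multinode-409200"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Printf("server cert: %d bytes of PEM\n", len(pemBytes))
	}
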
	I0429 12:43:05.203469    3296 provision.go:177] copyRemoteCerts
	I0429 12:43:05.217479    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:43:05.217479    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:07.318983    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:07.318983    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:07.319302    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:09.898054    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:09.898054    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:09.898952    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:09.997063    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7795466s)
	I0429 12:43:09.997124    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 12:43:09.997764    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:43:10.047385    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 12:43:10.047970    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 12:43:10.097809    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 12:43:10.098469    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:43:10.147392    3296 provision.go:87] duration metric: took 14.752632s to configureAuth
	I0429 12:43:10.147544    3296 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:43:10.148090    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:43:10.148180    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:12.343126    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:12.343461    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:12.343550    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:14.975410    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:14.975410    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:14.981555    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:14.982278    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:14.982278    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:43:15.110028    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:43:15.110028    3296 buildroot.go:70] root file system type: tmpfs
	I0429 12:43:15.110028    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:43:15.110028    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:17.280970    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:17.280970    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:17.281792    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:19.907151    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:19.907292    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:19.913390    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:19.914028    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:19.914121    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:43:20.069774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
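The unit text is rendered host-side and streamed over SSH into sudo tee as docker.service.new; the empty ExecStart= line clears the inherited command exactly as the embedded comments explain. A minimal Go sketch of rendering such a unit with text/template; the template here is trimmed to the varying parts and is not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// unitTmpl is a trimmed stand-in for the docker.service text above;
	// only the driver-specific dockerd flags vary.
	const unitTmpl = "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraFlags}}\nExecReload=/bin/kill -s HUP $MAINPID\n"

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		data := struct{ ExtraFlags string }{"--label provider=hyperv --insecure-registry 10.96.0.0/12"}
		// The log pipes the rendered text through ssh into sudo tee;
		// here it just goes to stdout.
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
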
	I0429 12:43:20.069774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:22.193863    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:22.194959    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:22.194995    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:24.736866    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:24.736866    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:24.744211    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:24.744211    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:24.744211    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:43:26.935989    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
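The one-liner above installs the staged unit only when it differs from what is on disk; diff failed here because docker.service did not exist yet, hence the symlink message from systemctl enable. The same compare-then-replace pattern in Go, with hypothetical file names:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged replaces target with staged only when their
	// contents differ or target is missing, reporting whether the
	// caller should daemon-reload and restart the service.
	func installIfChanged(target, staged string) (bool, error) {
		want, err := os.ReadFile(staged)
		if err != nil {
			return false, err
		}
		have, err := os.ReadFile(target)
		if err == nil && bytes.Equal(have, want) {
			return false, nil
		}
		if err := os.Rename(staged, target); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("docker.service", "docker.service.new")
		fmt.Println("changed:", changed, "err:", err)
	}
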
	I0429 12:43:26.935989    3296 machine.go:97] duration metric: took 46.5080745s to provisionDockerMachine
	I0429 12:43:26.935989    3296 client.go:171] duration metric: took 1m57.1651667s to LocalClient.Create
	I0429 12:43:26.935989    3296 start.go:167] duration metric: took 1m57.1651667s to libmachine.API.Create "multinode-409200"
	I0429 12:43:26.935989    3296 start.go:293] postStartSetup for "multinode-409200" (driver="hyperv")
	I0429 12:43:26.936526    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:43:26.950981    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:43:26.950981    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:29.014332    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:29.014507    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:29.014590    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:31.564952    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:31.564952    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:31.565713    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:31.666721    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7157042s)
	I0429 12:43:31.680632    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:43:31.688804    3296 command_runner.go:130] > NAME=Buildroot
	I0429 12:43:31.688804    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 12:43:31.688804    3296 command_runner.go:130] > ID=buildroot
	I0429 12:43:31.688804    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 12:43:31.688804    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 12:43:31.688804    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:43:31.688804    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 12:43:31.689611    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 12:43:31.690672    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 12:43:31.690778    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 12:43:31.703066    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:43:31.729604    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 12:43:31.780229    3296 start.go:296] duration metric: took 4.844202s for postStartSetup
	I0429 12:43:31.784136    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:33.908553    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:33.909388    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:33.909388    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:36.459415    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:36.459415    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:36.460347    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:43:36.463566    3296 start.go:128] duration metric: took 2m6.6971858s to createHost
	I0429 12:43:36.463729    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:38.546973    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:38.546973    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:38.548012    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:41.045793    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:41.045793    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:41.054379    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:41.055273    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:41.055273    3296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:43:41.191523    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394621.194598920
	
	I0429 12:43:41.192059    3296 fix.go:216] guest clock: 1714394621.194598920
	I0429 12:43:41.192059    3296 fix.go:229] Guest: 2024-04-29 12:43:41.19459892 +0000 UTC Remote: 2024-04-29 12:43:36.4636493 +0000 UTC m=+132.586901101 (delta=4.73094962s)
	I0429 12:43:41.192228    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:43.353158    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:43.353158    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:43.353419    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:45.947951    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:45.947951    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:45.954725    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:45.955446    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:45.955446    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714394621
	I0429 12:43:46.089226    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 12:43:41 UTC 2024
	
	I0429 12:43:46.089226    3296 fix.go:236] clock set: Mon Apr 29 12:43:41 UTC 2024
	 (err=<nil>)
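
Clock fix-up: the host reads the guest's date +%s.%N output, computes the skew against its own clock (4.73s here), and then sets the guest clock with date -s @epoch. A small Go sketch of parsing that output and computing the delta:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1714394621.194598920")
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest ahead of host by %s\n", guest.Sub(time.Now()))
	}
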
	I0429 12:43:46.089226    3296 start.go:83] releasing machines lock for "multinode-409200", held for 2m16.3227711s
	I0429 12:43:46.089226    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:48.230317    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:48.230419    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:48.230483    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:50.802602    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:50.802840    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:50.807572    3296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:43:50.807716    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:50.825417    3296 ssh_runner.go:195] Run: cat /version.json
	I0429 12:43:50.825524    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:53.002688    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:53.002968    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:53.002968    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:53.050817    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:53.051493    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:53.051493    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:55.699871    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:55.699967    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:55.700043    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:55.724091    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:55.724091    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:55.724091    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:55.795365    3296 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 12:43:55.795521    3296 ssh_runner.go:235] Completed: cat /version.json: (4.9699587s)
	I0429 12:43:55.813261    3296 ssh_runner.go:195] Run: systemctl --version
	I0429 12:43:55.899405    3296 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 12:43:55.899405    3296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0917933s)
	I0429 12:43:55.899405    3296 command_runner.go:130] > systemd 252 (252)
	I0429 12:43:55.899552    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 12:43:55.912945    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:43:55.922181    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 12:43:55.922699    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:43:55.936091    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:43:55.965604    3296 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 12:43:55.966233    3296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
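
Conflicting bridge/podman CNI configs are pushed aside by renaming them with a .mk_disabled suffix, leaving kindnet as the only active config. The same sweep in Go; the directory walk stands in for the logged find/mv pipeline.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman CNI configs out of the way.
	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		moved, err := disableBridgeCNIs("/etc/cni/net.d")
		fmt.Println(moved, err)
	}
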
	I0429 12:43:55.966284    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:43:55.966319    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:43:56.001475    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 12:43:56.015262    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:43:56.047945    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:43:56.070384    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:43:56.083490    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:43:56.120537    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:43:56.154883    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:43:56.188316    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:43:56.223442    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:43:56.258876    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:43:56.294527    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:43:56.327102    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:43:56.360132    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:43:56.378154    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 12:43:56.390095    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:43:56.422878    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:43:56.636016    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:43:56.671057    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:43:56.683519    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:43:56.709138    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Unit]
	I0429 12:43:56.709204    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 12:43:56.709204    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 12:43:56.709204    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 12:43:56.709204    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 12:43:56.709204    3296 command_runner.go:130] > StartLimitBurst=3
	I0429 12:43:56.709204    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Service]
	I0429 12:43:56.709204    3296 command_runner.go:130] > Type=notify
	I0429 12:43:56.709204    3296 command_runner.go:130] > Restart=on-failure
	I0429 12:43:56.709204    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 12:43:56.709204    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 12:43:56.709204    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 12:43:56.709204    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 12:43:56.709204    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 12:43:56.709204    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 12:43:56.709204    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecStart=
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 12:43:56.709204    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitNPROC=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitCORE=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 12:43:56.709204    3296 command_runner.go:130] > TasksMax=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > TimeoutStartSec=0
	I0429 12:43:56.709204    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 12:43:56.709204    3296 command_runner.go:130] > Delegate=yes
	I0429 12:43:56.709204    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 12:43:56.709204    3296 command_runner.go:130] > KillMode=process
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Install]
	I0429 12:43:56.709204    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0429 12:43:56.724078    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:43:56.760055    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:43:56.804342    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:43:56.841223    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:43:56.879244    3296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:43:56.945463    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:43:56.969681    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:43:57.013978    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 12:43:57.026023    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:43:57.032826    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 12:43:57.044975    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:43:57.063795    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:43:57.110207    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:43:57.317699    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:43:57.510634    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:43:57.510894    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:43:57.561438    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:43:57.760225    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:44:00.316595    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5563503s)
	I0429 12:44:00.335164    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:44:00.373198    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:44:00.408144    3296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:44:00.623820    3296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:44:00.830313    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:01.044370    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:44:01.092258    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:44:01.128927    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:01.339615    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:44:01.448476    3296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:44:01.462973    3296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:44:01.471348    3296 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 12:44:01.472099    3296 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 12:44:01.472099    3296 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0429 12:44:01.472099    3296 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 12:44:01.472099    3296 command_runner.go:130] > Access: 2024-04-29 12:44:01.364927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] > Modify: 2024-04-29 12:44:01.364927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] > Change: 2024-04-29 12:44:01.368927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] >  Birth: -
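
	The "Will wait 60s for socket path" step above amounts to polling stat until the unix socket appears. A sketch under that reading (poll interval and error text are illustrative choices, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the
	// deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
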
	I0429 12:44:01.472099    3296 start.go:562] Will wait 60s for crictl version
	I0429 12:44:01.486507    3296 ssh_runner.go:195] Run: which crictl
	I0429 12:44:01.492886    3296 command_runner.go:130] > /usr/bin/crictl
	I0429 12:44:01.507784    3296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:44:01.570510    3296 command_runner.go:130] > Version:  0.1.0
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeName:  docker
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 12:44:01.570510    3296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:44:01.581217    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:44:01.613056    3296 command_runner.go:130] > 26.0.2
	I0429 12:44:01.624406    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:44:01.656934    3296 command_runner.go:130] > 26.0.2
	I0429 12:44:01.665478    3296 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:44:01.665478    3296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 12:44:01.672596    3296 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 12:44:01.672596    3296 ip.go:210] interface addr: 172.26.176.1/20
	I0429 12:44:01.687109    3296 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 12:44:01.693840    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
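
	The one-liner above is an idempotent hosts-file update: grep out any existing line for the name, then append "<ip>\t<name>". A Go sketch of the same pattern (path, IP, and hostname taken from the log; the real command runs over SSH with sudo):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry strips any existing line ending in "\t<name>",
	// then appends a fresh "<ip>\t<name>" entry.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "172.26.176.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
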
	I0429 12:44:01.717667    3296 kubeadm.go:877] updating cluster {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:44:01.717874    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:44:01.729576    3296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:44:01.753147    3296 docker.go:685] Got preloaded images: 
	I0429 12:44:01.753147    3296 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:44:01.767100    3296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:44:01.785012    3296 command_runner.go:139] > {"Repositories":{}}
	I0429 12:44:01.798929    3296 ssh_runner.go:195] Run: which lz4
	I0429 12:44:01.805633    3296 command_runner.go:130] > /usr/bin/lz4
	I0429 12:44:01.805633    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:44:01.819826    3296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:44:01.825751    3296 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:44:01.826519    3296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:44:01.826519    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:44:03.851411    3296 docker.go:649] duration metric: took 2.0457613s to copy over tarball
	I0429 12:44:03.867833    3296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:44:12.753232    3296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8852851s)
	I0429 12:44:12.753314    3296 ssh_runner.go:146] rm: /preloaded.tar.lz4
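
	The "Completed: ... (8.8852851s)" duration metric above comes from timing a long-running remote command. A sketch of the pattern for the lz4 preload extraction (command and paths copied from the log; sudo and local execution assumed for illustration):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// Runs the preload extraction and reports how long it took.
	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			log.Fatalf("extract failed after %s: %v", time.Since(start), err)
		}
		log.Printf("Completed: tar extraction (%s)", time.Since(start))
	}
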
	I0429 12:44:12.822086    3296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:44:12.840727    3296 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 12:44:12.841096    3296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:44:12.894613    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:13.126976    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:44:16.488560    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3606689s)
	I0429 12:44:16.498170    3296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 12:44:16.525752    3296 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:44:16.525752    3296 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
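
	The preload decision above hinges on listing image refs and checking for a required tag (earlier, registry.k8s.io/kube-apiserver:v1.30.0 "wasn't preloaded"; now it is). A sketch of that check (image ref taken from the log; the format string is standard docker CLI templating):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasImage lists local image refs and looks for an exact match.
	func hasImage(ref string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == ref {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
		fmt.Println(ok, err)
	}
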
	I0429 12:44:16.525752    3296 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:44:16.525752    3296 kubeadm.go:928] updating node { 172.26.185.116 8443 v1.30.0 docker true true} ...
	I0429 12:44:16.525752    3296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.185.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:44:16.535787    3296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:44:16.573357    3296 command_runner.go:130] > cgroupfs
	I0429 12:44:16.574213    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:44:16.574213    3296 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:44:16.574304    3296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:44:16.574304    3296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.185.116 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-409200 NodeName:multinode-409200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.185.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.185.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:44:16.574671    3296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.185.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-409200"
	  kubeletExtraArgs:
	    node-ip: 172.26.185.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:44:16.587109    3296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubeadm
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubectl
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubelet
	I0429 12:44:16.607486    3296 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:44:16.619346    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 12:44:16.637355    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 12:44:16.671622    3296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:44:16.704800    3296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0429 12:44:16.752839    3296 ssh_runner.go:195] Run: grep 172.26.185.116	control-plane.minikube.internal$ /etc/hosts
	I0429 12:44:16.760084    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.185.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:44:16.797647    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:17.006548    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:44:17.033894    3296 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.185.116
	I0429 12:44:17.033894    3296 certs.go:194] generating shared ca certs ...
	I0429 12:44:17.034052    3296 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.034597    3296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 12:44:17.035031    3296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 12:44:17.035221    3296 certs.go:256] generating profile certs ...
	I0429 12:44:17.036085    3296 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key
	I0429 12:44:17.036211    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt with IP's: []
	I0429 12:44:17.301116    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt ...
	I0429 12:44:17.302129    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt: {Name:mkfee835225f0dcf0ca6b08c61d512a13d0301a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.303376    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key ...
	I0429 12:44:17.303376    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key: {Name:mk4d7a0cb775c99aef602c36f31814957f63535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.304404    3296 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62
	I0429 12:44:17.304404    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.185.116]
	I0429 12:44:17.586870    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 ...
	I0429 12:44:17.586870    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62: {Name:mk0a0a342ca8f742883109c474511a24825717f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.588592    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62 ...
	I0429 12:44:17.588592    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62: {Name:mk32857892243135c3cbfe168f73f05a5d58da10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.589224    3296 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt
	I0429 12:44:17.606459    3296 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key
	I0429 12:44:17.607893    3296 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key
	I0429 12:44:17.608084    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt with IP's: []
	I0429 12:44:17.874409    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt ...
	I0429 12:44:17.874409    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt: {Name:mkd8d2745eb84bf562904d25d78a7b0493e0cb19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.876814    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key ...
	I0429 12:44:17.876814    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key: {Name:mk7bf6bbe7b08ba2b2f94cfa54674c3d6223c5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
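
	The "generating signed profile cert" steps above issue leaf certificates signed by the shared CA, with the IP SANs listed at crypto.go:68. A sketch of what that amounts to with crypto/x509; minikube loads its CA from disk, whereas this creates one in memory for brevity, and names, SANs, and the 26280h lifetime are copied from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// In-memory stand-in for the on-disk CA; errors elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the apiserver-style IP SANs from the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("172.26.185.116")},
		}
		der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("issued apiserver-style cert, %d DER bytes", len(der))
	}
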
	I0429 12:44:17.877250    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:44:17.878152    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:44:17.878341    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:44:17.878507    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:44:17.878669    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:44:17.878826    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:44:17.878978    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:44:17.888209    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:44:17.888566    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 12:44:17.889216    3296 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 12:44:17.889216    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 12:44:17.889604    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 12:44:17.889823    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 12:44:17.890158    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 12:44:17.890381    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:17.892657    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:44:17.937012    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:44:17.975119    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:44:18.025670    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:44:18.074631    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:44:18.125186    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:44:18.175688    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:44:18.226917    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:44:18.280971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 12:44:18.331289    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 12:44:18.381170    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:44:18.427231    3296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:44:18.475141    3296 ssh_runner.go:195] Run: openssl version
	I0429 12:44:18.484326    3296 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 12:44:18.499276    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 12:44:18.538050    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.546253    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.546253    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.560608    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.570893    3296 command_runner.go:130] > 3ec20f2e
	I0429 12:44:18.588308    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:44:18.624233    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:44:18.663145    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.670597    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.670597    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.685901    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.695104    3296 command_runner.go:130] > b5213941
	I0429 12:44:18.706505    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:44:18.741866    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 12:44:18.774617    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.781062    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.781228    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.796432    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.804880    3296 command_runner.go:130] > 51391683
	I0429 12:44:18.819968    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
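
	The hash-then-symlink loop above installs each CA into OpenSSL's hashed lookup directory: `openssl x509 -hash` yields the subject hash (e.g. b5213941), and the cert is linked as /etc/ssl/certs/<hash>.0. A sketch of the same pattern (paths are illustrative; the hash collision suffix is fixed at .0 as in the log):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a certificate and
	// symlinks it as <certsDir>/<hash>.0, mirroring `ln -fs`.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, as -fs does
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}
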
	I0429 12:44:18.854731    3296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:44:18.859928    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:44:18.860911    3296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:44:18.860911    3296 kubeadm.go:391] StartCluster: {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:44:18.872252    3296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:44:18.908952    3296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:44:18.928831    3296 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 12:44:18.929074    3296 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 12:44:18.929074    3296 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 12:44:18.944379    3296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:44:18.974571    3296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:44:18.994214    3296 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:44:18.994214    3296 kubeadm.go:156] found existing configuration files:
	
	I0429 12:44:19.008068    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:44:19.025289    3296 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:44:19.026305    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:44:19.039591    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:44:19.081589    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:44:19.102122    3296 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:44:19.102278    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:44:19.115510    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:44:19.146531    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:44:19.164905    3296 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:44:19.164905    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:44:19.181666    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:44:19.213432    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:44:19.234056    3296 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:44:19.234643    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:44:19.246709    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
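
	The loop above checks each kubeconfig for the expected control-plane endpoint and removes files that do not reference it, so kubeadm regenerates them. A sketch of that grep-or-remove pattern (paths and endpoint copied from the log; on this first start none of the files exist, so every branch falls through to removal):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfig keeps a kubeconfig only if it already points at the
	// expected endpoint; otherwise it is deleted.
	func cleanStaleConfig(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil // nothing to clean
			}
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil
		}
		return os.Remove(path)
	}

	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := cleanStaleConfig("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
				fmt.Println(err)
			}
		}
	}
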
	I0429 12:44:19.265779    3296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:44:19.521753    3296 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:44:19.521821    3296 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 12:44:19.522093    3296 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:44:19.522175    3296 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 12:44:19.707934    3296 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:44:19.707934    3296 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:44:19.708156    3296 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:44:19.708156    3296 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:44:19.708156    3296 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:44:19.708156    3296 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:44:20.023840    3296 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:44:20.023840    3296 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:44:20.029443    3296 out.go:204]   - Generating certificates and keys ...
	I0429 12:44:20.029588    3296 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 12:44:20.029588    3296 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:44:20.029757    3296 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 12:44:20.029823    3296 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:44:20.369033    3296 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:44:20.369103    3296 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:44:20.476523    3296 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:44:20.476523    3296 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:44:20.776704    3296 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:44:20.776760    3296 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 12:44:21.061534    3296 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:44:21.061650    3296 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 12:44:21.304438    3296 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 12:44:21.304438    3296 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:44:21.304438    3296 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.304438    3296 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.896641    3296 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:44:21.897606    3296 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 12:44:21.897866    3296 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.898051    3296 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:22.003777    3296 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:44:22.003777    3296 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:44:22.188658    3296 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:44:22.188658    3296 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:44:22.373946    3296 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:44:22.373946    3296 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0429 12:44:22.374390    3296 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:44:22.374390    3296 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:44:22.494389    3296 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:44:22.495356    3296 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:44:22.609117    3296 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:44:22.609248    3296 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:44:22.737208    3296 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:44:22.737208    3296 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:44:22.999498    3296 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:44:22.999498    3296 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:44:23.233231    3296 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:44:23.233920    3296 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:44:23.234997    3296 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:44:23.235061    3296 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:44:23.242239    3296 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:44:23.242239    3296 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:44:23.248689    3296 out.go:204]   - Booting up control plane ...
	I0429 12:44:23.248689    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:44:23.248689    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:44:23.248689    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:44:23.248689    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:44:23.249337    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:44:23.249337    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:44:23.289260    3296 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:44:23.289318    3296 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:44:23.293418    3296 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:44:23.293418    3296 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:44:23.293418    3296 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:44:23.293418    3296 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 12:44:23.533729    3296 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:44:23.533729    3296 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:44:23.533729    3296 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:44:23.533729    3296 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:44:24.536142    3296 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002273096s
	I0429 12:44:24.536438    3296 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002273096s
	I0429 12:44:24.536769    3296 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:44:24.536769    3296 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:44:32.037593    3296 kubeadm.go:309] [api-check] The API server is healthy after 7.502621698s
	I0429 12:44:32.038456    3296 command_runner.go:130] > [api-check] The API server is healthy after 7.502621698s
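
	The "[api-check]" wait above polls the API server until it reports healthy or a 4m0s deadline passes. A sketch of that kind of probe (the /healthz URL, poll interval, and skipped TLS verification are illustrative simplifications, not kubeadm's exact client):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls url until it answers 200 or the deadline passes.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("API server not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServer("https://172.26.185.116:8443/healthz", 4*time.Minute))
	}
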
	I0429 12:44:32.060219    3296 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:44:32.060334    3296 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:44:32.091007    3296 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:44:32.091546    3296 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:44:32.144604    3296 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:44:32.144604    3296 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:44:32.145060    3296 command_runner.go:130] > [mark-control-plane] Marking the node multinode-409200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:44:32.145060    3296 kubeadm.go:309] [mark-control-plane] Marking the node multinode-409200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:44:32.161257    3296 command_runner.go:130] > [bootstrap-token] Using token: yfqpmq.jq2ry4kf0oz9zbyr
	I0429 12:44:32.161257    3296 kubeadm.go:309] [bootstrap-token] Using token: yfqpmq.jq2ry4kf0oz9zbyr
	I0429 12:44:32.164440    3296 out.go:204]   - Configuring RBAC rules ...
	I0429 12:44:32.164650    3296 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:44:32.164730    3296 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:44:32.173392    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:44:32.173466    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:44:32.192810    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:44:32.192895    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:44:32.198990    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:44:32.198990    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:44:32.207240    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:44:32.207347    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:44:32.220434    3296 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:44:32.220434    3296 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:44:32.454502    3296 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:44:32.454502    3296 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:44:32.926433    3296 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:44:32.926433    3296 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 12:44:33.459427    3296 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:44:33.459540    3296 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.460905    3296 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:44:33.460905    3296 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.460905    3296 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:44:33.460905    3296 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.461661    3296 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:44:33.461661    3296 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 12:44:33.461841    3296 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:44:33.461841    3296 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:44:33.461841    3296 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:44:33.462057    3296 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:44:33.462057    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 12:44:33.462176    3296 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:44:33.462176    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:44:33.462176    3296 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:44:33.462176    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 12:44:33.462176    3296 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:44:33.462721    3296 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:44:33.462721    3296 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:44:33.462721    3296 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:44:33.462900    3296 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:44:33.462900    3296 kubeadm.go:309] 
	I0429 12:44:33.463040    3296 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:44:33.463040    3296 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:44:33.463040    3296 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:44:33.463040    3296 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 12:44:33.463040    3296 kubeadm.go:309] 
	I0429 12:44:33.463040    3296 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.463040    3296 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.463831    3296 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 12:44:33.463927    3296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 12:44:33.464158    3296 command_runner.go:130] > 	--control-plane 
	I0429 12:44:33.464222    3296 kubeadm.go:309] 	--control-plane 
	I0429 12:44:33.464222    3296 kubeadm.go:309] 
	I0429 12:44:33.464222    3296 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:44:33.464364    3296 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:44:33.464438    3296 kubeadm.go:309] 
	I0429 12:44:33.464625    3296 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.464625    3296 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.464755    3296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:44:33.464832    3296 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:44:33.465037    3296 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:44:33.465037    3296 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
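Note: the --discovery-token-ca-cert-hash printed in both join commands pins the cluster CA during token-based discovery. It is the SHA-256 of the DER-encoded public key (SubjectPublicKeyInfo) of the CA certificate. A sketch of recomputing it, assuming kubeadm's default CA location /etc/kubernetes/pki/ca.crt:

    // cahash.go - recompute the discovery-token-ca-cert-hash from the CA cert.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Re-encode the public key as DER (SubjectPublicKeyInfo) and hash it.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }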
	I0429 12:44:33.465099    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:44:33.465167    3296 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:44:33.468388    3296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:44:33.482496    3296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:44:33.490673    3296 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 12:44:33.490673    3296 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 12:44:33.491156    3296 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 12:44:33.491156    3296 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 12:44:33.491156    3296 command_runner.go:130] > Access: 2024-04-29 12:42:36.251857500 +0000
	I0429 12:44:33.491156    3296 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 12:44:33.491225    3296 command_runner.go:130] > Change: 2024-04-29 12:42:28.230000000 +0000
	I0429 12:44:33.491225    3296 command_runner.go:130] >  Birth: -
	I0429 12:44:33.491369    3296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:44:33.491369    3296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:44:33.549690    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:44:34.258026    3296 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > serviceaccount/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > daemonset.apps/kindnet created
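Note: "scp memory --> /var/tmp/minikube/cni.yaml" above means the rendered kindnet manifest is streamed straight from minikube's memory to the node over SSH (no intermediate local file), then applied with the version-pinned kubectl, producing the four "created" lines. A rough sketch of the write-over-SSH pattern, assuming an already-dialed golang.org/x/crypto/ssh client and with error handling abbreviated:

    // sshwrite.go - stream an in-memory manifest to a remote path via SSH.
    package sshwrite

    import (
    	"bytes"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemote pipes data into `sudo tee` on the remote host.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run("sudo tee " + path + " >/dev/null")
    }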
	I0429 12:44:34.258191    3296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:44:34.274018    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:34.274904    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-409200 minikube.k8s.io/updated_at=2024_04_29T12_44_34_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=multinode-409200 minikube.k8s.io/primary=true
	I0429 12:44:34.289779    3296 command_runner.go:130] > -16
	I0429 12:44:34.290424    3296 ops.go:34] apiserver oom_adj: -16
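Note: the oom_adj read above (cat /proc/$(pgrep kube-apiserver)/oom_adj) verifies that the kernel OOM killer will deprioritize kube-apiserver; negative values such as -16 make the process far less likely to be killed under memory pressure (modern kernels expose the same knob as oom_score_adj, keeping oom_adj as a legacy alias). A sketch of the same check, assuming a single kube-apiserver process:

    // oomadj.go - read the apiserver's legacy oom_adj value from procfs.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep prints one PID per line; this sketch assumes exactly one match.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }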
	I0429 12:44:34.463193    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 12:44:34.463193    3296 command_runner.go:130] > node/multinode-409200 labeled
	I0429 12:44:34.478175    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:34.590305    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:34.977286    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:35.090759    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:35.480369    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:35.602599    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:35.990560    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:36.099579    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:36.479582    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:36.601445    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:36.981693    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:37.090673    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:37.484965    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:37.603109    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:37.982196    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:38.096571    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:38.484642    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:38.609982    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:38.985308    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:39.096468    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:39.489292    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:39.606454    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:39.992451    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:40.124908    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:40.481016    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:40.598978    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:40.981506    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:41.094480    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:41.487459    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:41.607630    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:41.983291    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:42.114225    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:42.491967    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:42.636510    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:42.984042    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:43.144920    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:43.485736    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:43.607086    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:43.987859    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:44.098597    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:44.479282    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:44.598110    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:44.982334    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:45.112645    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:45.487733    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:45.604298    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:45.993246    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:46.110503    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:46.489099    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:46.598695    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:46.992710    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:47.118922    3296 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 12:44:47.118922    3296 command_runner.go:130] > default   0         1s
	I0429 12:44:47.118922    3296 kubeadm.go:1107] duration metric: took 12.8606308s to wait for elevateKubeSystemPrivileges
	W0429 12:44:47.118922    3296 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:44:47.118922    3296 kubeadm.go:393] duration metric: took 28.257791s to StartCluster
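Note: the run of NotFound errors above is expected, not a failure. The "default" ServiceAccount is created asynchronously by kube-controller-manager's service-account controller, so minikube retries `kubectl get sa default` roughly twice a second until it exists (the elevateKubeSystemPrivileges wait), having already bound cluster-admin to kube-system:default via the minikube-rbac ClusterRoleBinding. A sketch of the same wait using client-go instead of repeated kubectl calls; the kubeconfig path and the two-minute timeout are assumptions:

    // waitsa.go - poll until the "default" ServiceAccount exists.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // controller hasn't created it yet
    			}
    			return err == nil, err
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("default ServiceAccount is ready")
    }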
	I0429 12:44:47.118922    3296 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:47.119947    3296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.120913    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:47.123001    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:44:47.123001    3296 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:44:47.123001    3296 addons.go:69] Setting storage-provisioner=true in profile "multinode-409200"
	I0429 12:44:47.123001    3296 addons.go:234] Setting addon storage-provisioner=true in "multinode-409200"
	I0429 12:44:47.123001    3296 start.go:234] Will wait 6m0s for node &{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:44:47.123001    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:44:47.123001    3296 addons.go:69] Setting default-storageclass=true in profile "multinode-409200"
	I0429 12:44:47.123001    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:44:47.128930    3296 out.go:177] * Verifying Kubernetes components...
	I0429 12:44:47.123918    3296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-409200"
	I0429 12:44:47.124922    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:47.129918    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
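Note: the hyperv driver performs VM queries by shelling out to PowerShell, as in the two [executing ==>] lines above; the state expression prints "Running" (or "Off", "Saved", ...) on stdout, which appears later as the [stdout =====>] lines. A minimal sketch of that call:

    // vmstate.go - query a Hyper-V VM's state via PowerShell.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func vmState(name string) (string, error) {
    	cmd := exec.Command("powershell", "-NoProfile", "-NonInteractive",
    		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
    	out, err := cmd.Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	state, err := vmState("multinode-409200")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("VM state:", state)
    }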
	I0429 12:44:47.147920    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:47.387572    3296 command_runner.go:130] > apiVersion: v1
	I0429 12:44:47.387572    3296 command_runner.go:130] > data:
	I0429 12:44:47.387572    3296 command_runner.go:130] >   Corefile: |
	I0429 12:44:47.387572    3296 command_runner.go:130] >     .:53 {
	I0429 12:44:47.387572    3296 command_runner.go:130] >         errors
	I0429 12:44:47.387572    3296 command_runner.go:130] >         health {
	I0429 12:44:47.387572    3296 command_runner.go:130] >            lameduck 5s
	I0429 12:44:47.387572    3296 command_runner.go:130] >         }
	I0429 12:44:47.387572    3296 command_runner.go:130] >         ready
	I0429 12:44:47.388041    3296 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 12:44:47.388041    3296 command_runner.go:130] >            pods insecure
	I0429 12:44:47.388041    3296 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 12:44:47.388041    3296 command_runner.go:130] >            ttl 30
	I0429 12:44:47.388041    3296 command_runner.go:130] >         }
	I0429 12:44:47.388128    3296 command_runner.go:130] >         prometheus :9153
	I0429 12:44:47.388128    3296 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 12:44:47.388171    3296 command_runner.go:130] >            max_concurrent 1000
	I0429 12:44:47.388171    3296 command_runner.go:130] >         }
	I0429 12:44:47.388171    3296 command_runner.go:130] >         cache 30
	I0429 12:44:47.388220    3296 command_runner.go:130] >         loop
	I0429 12:44:47.388220    3296 command_runner.go:130] >         reload
	I0429 12:44:47.388220    3296 command_runner.go:130] >         loadbalance
	I0429 12:44:47.388220    3296 command_runner.go:130] >     }
	I0429 12:44:47.388220    3296 command_runner.go:130] > kind: ConfigMap
	I0429 12:44:47.388329    3296 command_runner.go:130] > metadata:
	I0429 12:44:47.388329    3296 command_runner.go:130] >   creationTimestamp: "2024-04-29T12:44:32Z"
	I0429 12:44:47.388329    3296 command_runner.go:130] >   name: coredns
	I0429 12:44:47.388329    3296 command_runner.go:130] >   namespace: kube-system
	I0429 12:44:47.388329    3296 command_runner.go:130] >   resourceVersion: "227"
	I0429 12:44:47.388329    3296 command_runner.go:130] >   uid: 11d612e9-bdbd-4d3c-bda3-1675a32714c4
	I0429 12:44:47.391023    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:44:47.585846    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:44:47.981142    3296 command_runner.go:130] > configmap/coredns replaced
	I0429 12:44:47.981263    3296 start.go:946] {"host.minikube.internal": 172.26.176.1} host record injected into CoreDNS's ConfigMap
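Note: the bash pipeline above pulls the Corefile out of the coredns ConfigMap, uses sed to splice in a hosts block mapping host.minikube.internal to the host-side gateway IP (172.26.176.1) just before the forward plugin (plus a log directive before errors), then replaces the ConfigMap, so pods can resolve the Hyper-V host by name. A sketch of the hosts-block insertion as a plain string edit, for illustration only:

    // corednshosts.go - splice a hosts{} stanza ahead of the forward plugin.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	hosts := "        hosts {\n" +
    		"           " + hostIP + " host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	// Insert the block immediately before the forward plugin line.
    	return strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
    }

    func main() {
    	in := "    .:53 {\n        errors\n        forward . /etc/resolv.conf\n    }\n"
    	fmt.Print(injectHostRecord(in, "172.26.176.1"))
    }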
	I0429 12:44:47.982700    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.984024    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:47.984268    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.985537    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:47.986109    3296 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:44:47.986109    3296 node_ready.go:35] waiting up to 6m0s for node "multinode-409200" to be "Ready" ...
	I0429 12:44:47.986777    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:47.986861    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:47.986861    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:47.986941    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:47.987125    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:47.987125    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:47.987125    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:47.987125    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.027978    3296 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0429 12:44:48.028758    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Audit-Id: f314a025-2e4e-4940-8cd7-8ecee51f4571
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.028758    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.028758    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.029450    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:48.030717    3296 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0429 12:44:48.031252    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.031252    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.031339    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Audit-Id: f5c59814-cf4b-4333-b29a-eb0d79883cc3
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.031377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.031377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.031441    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"357","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.032296    3296 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"357","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.032426    3296 round_trippers.go:463] PUT https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:48.032487    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.032487    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.032487    3296 round_trippers.go:473]     Content-Type: application/json
	I0429 12:44:48.032487    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.063859    3296 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 12:44:48.063859    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Audit-Id: 4609687e-4e83-49b1-961a-97562f2387dc
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.063859    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.063859    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.063859    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"359","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.497157    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:48.497157    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.497157    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.497157    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.497157    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:48.497157    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.497157    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.497157    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.507102    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:44:48.507369    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Audit-Id: e6440a59-e8b8-4918-8657-6f67c72b256e
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.507369    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.507369    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.507457    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"369","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.507595    3296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-409200" context rescaled to 1 replicas
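Note: the GET/PUT pair against .../deployments/coredns/scale above is a read-modify-write of the autoscaling/v1 Scale subresource, dropping spec.replicas from 2 to 1: a single-node cluster only needs one CoreDNS pod. An equivalent sketch with client-go, assuming the in-VM kubeconfig path:

    // rescale.go - scale the coredns Deployment via the Scale subresource.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Reusing the fetched object keeps resourceVersion for optimistic concurrency.
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }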
	I0429 12:44:48.518111    3296 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0429 12:44:48.518111    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Audit-Id: 5431e861-130c-4816-bedd-bbfa55282ccf
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.518111    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.518111    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.518111    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:48.987654    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:48.987654    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.987654    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.987753    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.994725    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:44:48.994835    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Audit-Id: 4beb9ef9-2883-408d-98a1-5f73aac11dc7
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.994943    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.994943    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.995220    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.429025    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:49.429931    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:49.430156    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:49.430156    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:49.434434    3296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:44:49.431380    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:49.437224    3296 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:44:49.437224    3296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:44:49.437224    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:49.437224    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:49.438210    3296 addons.go:234] Setting addon default-storageclass=true in "multinode-409200"
	I0429 12:44:49.438210    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:44:49.439218    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:49.493646    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:49.493711    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:49.493711    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:49.493711    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:49.497211    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:49.497952    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Audit-Id: 17c3ae84-c79d-4230-aca9-cd037a4c1fed
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:49.498047    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:49.498047    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:49.498047    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:49 GMT
	I0429 12:44:49.499254    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.987901    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:49.988181    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:49.988181    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:49.988181    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:49.991502    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:49.991502    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:49.991502    3296 round_trippers.go:580]     Audit-Id: f90ac51b-8b26-4bd0-8e1a-4de276b4ecc4
	I0429 12:44:49.991502    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:49.992231    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:49.992231    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:49.992231    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:49.992231    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:49 GMT
	I0429 12:44:49.993343    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.993502    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
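Note: node_ready.go polls GET /api/v1/nodes/multinode-409200 roughly twice a second and reports "Ready":"False" until kubelet flips the node's Ready condition, which typically waits on the kindnet CNI coming up (kubelet stays NotReady while the container runtime network is not ready). A sketch of the underlying condition check with client-go, kubeconfig path assumed:

    // nodeready.go - fetch a Node and print its Ready condition.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-409200", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
    		}
    	}
    }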
	I0429 12:44:50.497126    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:50.497366    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:50.497366    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:50.497462    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:50.502633    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:44:50.502633    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:50.502633    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:50.502633    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:50 GMT
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Audit-Id: 6d838465-5f59-4111-b729-d239b60ad1e5
	I0429 12:44:50.503643    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:50.989937    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:50.990001    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:50.990001    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:50.990001    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:50.993584    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:50.993584    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:50.993584    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:50.993584    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:50 GMT
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Audit-Id: 6ea1b049-c95f-4a57-b156-184f4e7a532d
	I0429 12:44:50.993584    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.499582    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:51.499703    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:51.499703    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:51.499703    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:51.504097    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:51.504255    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Audit-Id: 129b3891-9b73-4491-96d4-b549272136b0
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:51.504255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:51.504332    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:51.504332    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:51 GMT
	I0429 12:44:51.505870    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:44:51.812345    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:51.813143    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:51.813323    3296 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:44:51.813345    3296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:44:51.813446    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:51.990003    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:51.990243    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:51.990243    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:51.990243    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:51.994853    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:51.994961    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Audit-Id: fd025c19-6e78-4182-91a3-cabf4cd9eef4
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:51.995033    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:51.995033    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:51.995033    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:51 GMT
	I0429 12:44:51.995280    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.995831    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:52.495839    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:52.495906    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:52.495906    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:52.495972    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:52.499456    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:52.500476    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:52.500476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:52.500476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:52 GMT
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Audit-Id: 9580c8ca-591c-47ad-be84-8fee1ba5737e
	I0429 12:44:52.500476    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:52.988440    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:52.988440    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:52.988440    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:52.988440    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:52.992068    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:52.992068    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:52.992068    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:52.992346    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:52.992346    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:52 GMT
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Audit-Id: aea305e3-0942-47d6-b689-14b4a7afbb67
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:52.993054    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:53.492686    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:53.492686    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:53.492686    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:53.492686    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:53.497503    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:53.497879    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Audit-Id: 1121d8c2-9418-49fb-9d73-ebb54f0d7b5e
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:53.497940    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:53.497940    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:53 GMT
	I0429 12:44:53.498274    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:53.987193    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:53.987252    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:53.987252    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:53.987252    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:53.990798    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:53.990798    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Audit-Id: b800aac0-4546-44f3-adc5-a0d7d3b96135
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:53.990798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:53.990798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:53 GMT
	I0429 12:44:53.990798    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:44:54.393125    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:44:54.393125    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:54.394344    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
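[editor's note] The lines above show the Hyper-V driver shelling out to PowerShell for the VM's first IP address before opening an SSH session. A minimal Go sketch of that step, assuming powershell.exe is on PATH and the caller has Hyper-V rights; hyperVIP and its error handling are illustrative, not minikube's actual driver code:

// Read a Hyper-V VM's first IPv4 address via PowerShell, mirroring the
// exact expression the log shows being executed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hyperVIP(vmName string) (string, error) {
	// Same query as the "[executing ==>]" log line above.
	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hyperVIP("multinode-409200")
	if err != nil {
		fmt.Println("stderr:", err)
		return
	}
	fmt.Println("stdout:", ip) // e.g. 172.26.185.116, as logged above
}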
	I0429 12:44:54.494172    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:54.494172    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:54.494172    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:54.494172    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:54.498184    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:54.498184    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Audit-Id: 0ce7ac22-e96e-4cc8-a586-c8fb2a5464cf
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:54.498184    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:54.498184    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:54 GMT
	I0429 12:44:54.498184    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:54.499180    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:54.542173    3296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:44:54.987695    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:54.987695    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:54.987695    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:54.987695    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.121049    3296 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I0429 12:44:55.121049    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.121154    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.121154    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Audit-Id: d2cf4358-518d-4e2b-b3b4-d9d06fb318d4
	I0429 12:44:55.121498    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:55.283741    3296 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 12:44:55.283834    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 12:44:55.283897    3296 command_runner.go:130] > pod/storage-provisioner created
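[editor's note] The ssh_runner line at 12:44:54.542173 runs kubectl on the guest over SSH, and the command_runner lines above are the apply's stdout. A sketch of that remote invocation using golang.org/x/crypto/ssh, under the assumption that the host, user, and key path shown in the log are valid for this run; skipping host-key verification is a test-environment shortcut only:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa`
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "172.26.185.116:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The exact remote command recorded by ssh_runner.go:195 above.
	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s(err=%v)\n", out, err)
}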
	I0429 12:44:55.498208    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:55.498269    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:55.498269    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:55.498269    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.501876    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:55.501876    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Audit-Id: 12ca5c45-8941-45dc-84a9-4159fc888677
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.501954    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.501954    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.502158    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:55.992048    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:55.992048    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:55.992048    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:55.992048    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.994904    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:44:55.994904    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.995929    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.995929    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.995929    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Audit-Id: 6ec09ca9-3be4-4d9b-be97-56d8f9a7a96d
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.996284    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:56.499993    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:56.500060    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.500060    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.500060    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.503881    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.503881    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.503881    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.503881    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Audit-Id: 1e1a4dde-85ad-4c09-9ab3-a55c1ea5bb43
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.504485    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:56.505286    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:56.639576    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:44:56.640629    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:56.640629    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:44:56.766412    3296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:44:56.941419    3296 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0429 12:44:56.943064    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:44:56.943160    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.943160    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.943160    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.946411    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.946476    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Audit-Id: 433f1ba2-60c3-44e5-a79c-51b09710afa1
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.946476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.946476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.946550    3296 round_trippers.go:580]     Content-Length: 1273
	I0429 12:44:56.946550    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.946550    3296 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 12:44:56.947079    3296 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 12:44:56.947239    3296 round_trippers.go:463] PUT https://172.26.185.116:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:44:56.947239    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.947239    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.947239    3296 round_trippers.go:473]     Content-Type: application/json
	I0429 12:44:56.947239    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.950933    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.950933    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.950933    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.950933    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.950933    3296 round_trippers.go:580]     Content-Length: 1220
	I0429 12:44:56.950933    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Audit-Id: 806f726f-3d36-4312-b0f9-6c2058e71382
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.951285    3296 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 12:44:56.954419    3296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:44:56.957679    3296 addons.go:505] duration metric: took 9.8346009s for enable addons: enabled=[storage-provisioner default-storageclass]
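[editor's note] The default-storageclass addon above is reconciled with a GET followed by a PUT that keeps the storageclass.kubernetes.io/is-default-class annotation set. A simplified sketch of that GET-then-PUT against the raw REST paths; it fetches the "standard" object directly rather than listing all storage classes as the log does, and the bearer token plus skipped TLS verification are stand-ins for minikube's real client-go transport:

package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const api = "https://172.26.185.116:8443"

func main() {
	token := "<service-account-token>" // hypothetical credential
	c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	get, _ := http.NewRequest("GET", api+"/apis/storage.k8s.io/v1/storageclasses/standard", nil)
	get.Header.Set("Authorization", "Bearer "+token)
	resp, err := c.Do(get)
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	var sc map[string]any
	if err := json.Unmarshal(body, &sc); err != nil {
		panic(err)
	}
	meta := sc["metadata"].(map[string]any)
	ann, _ := meta["annotations"].(map[string]any)
	if ann == nil {
		ann = map[string]any{}
		meta["annotations"] = ann
	}
	ann["storageclass.kubernetes.io/is-default-class"] = "true" // the state the PUT above preserves

	buf, _ := json.Marshal(sc)
	put, _ := http.NewRequest("PUT", api+"/apis/storage.k8s.io/v1/storageclasses/standard", bytes.NewReader(buf))
	put.Header.Set("Authorization", "Bearer "+token)
	put.Header.Set("Content-Type", "application/json")
	r2, err := c.Do(put)
	if err != nil {
		panic(err)
	}
	r2.Body.Close()
	fmt.Println("PUT status:", r2.Status) // 200 OK on success, as in the log
}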
	I0429 12:44:56.986568    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:56.986568    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.986682    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.986682    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.006275    3296 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0429 12:44:57.007061    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.007061    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Audit-Id: 6216f9d2-42b3-4511-903b-d7b986c00ed3
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.007061    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.007061    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:57.487285    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:57.487613    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:57.487613    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:57.487613    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.494601    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:44:57.494601    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.494601    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.494601    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Audit-Id: 2e534af6-cd9e-43af-af28-8c1bcc1f9efa
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.495335    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:57.987102    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:57.987102    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:57.987189    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.987189    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:57.991241    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:57.991314    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.991314    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Audit-Id: bf0ecebf-ceb3-4179-abb5-b96d55120d71
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.991428    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.991428    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.992189    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.486877    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:58.486964    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:58.486964    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:58.486964    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:58.490349    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:58.490349    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:58.490349    3296 round_trippers.go:580]     Audit-Id: 6393d1cf-7a26-4f32-9232-cae6ce627786
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:58.490662    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:58.490662    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:58 GMT
	I0429 12:44:58.490835    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.987199    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:58.987271    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:58.987271    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:58.987271    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:58.991919    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:58.991919    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:58.991919    3296 round_trippers.go:580]     Audit-Id: ad4cb3b8-242d-41ca-ad8a-a7d767a7bc16
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:58.992276    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:58.992276    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:58 GMT
	I0429 12:44:58.992644    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.993258    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:59.499083    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:59.499083    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:59.499083    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:59.499173    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:59.503490    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:59.503797    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:59.503797    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:59.503797    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:59 GMT
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Audit-Id: 7d006809-6f37-417c-8d2d-ceabc88f5c0f
	I0429 12:44:59.504601    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:59.999824    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:59.999824    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:59.999824    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:59.999824    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.003417    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:00.003417    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.003417    3296 round_trippers.go:580]     Audit-Id: 7447b2af-64d8-4510-88a0-932e5399c7e8
	I0429 12:45:00.003417    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.003666    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.003666    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.003666    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.003666    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:00 GMT
	I0429 12:45:00.004352    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.488108    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:00.488108    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:00.488108    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:00.488108    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.493711    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:00.494629    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Audit-Id: 850ddbf7-5fb9-4d9c-ab44-d2b06881b8ad
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.494629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.494629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.494716    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:00 GMT
	I0429 12:45:00.495291    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.988408    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:00.988408    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:00.988408    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:00.988408    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.998088    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:45:00.998088    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Audit-Id: fae8b223-43af-47e7-a8e1-df95a944f347
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.998088    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.998088    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:01 GMT
	I0429 12:45:00.998854    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.999491    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:45:01.487801    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:01.487892    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.487892    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:01.487892    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.491255    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:01.491255    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Audit-Id: b1561a53-6d82-4792-a0cb-a54e5b0add20
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:01.491255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:01.491255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:01 GMT
	I0429 12:45:01.492595    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:01.992815    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:01.992815    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.992921    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.992921    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:01.998238    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:01.998831    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:01.998831    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:01.998831    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Audit-Id: 48ad1f42-c9c3-4e50-a0f9-f7f8f769e6ae
	I0429 12:45:01.998977    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:01.999717    3296 node_ready.go:49] node "multinode-409200" has status "Ready":"True"
	I0429 12:45:01.999717    3296 node_ready.go:38] duration metric: took 14.0134996s for node "multinode-409200" to be "Ready" ...
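For reference, the node_ready probe above amounts to re-issuing GET /api/v1/nodes/<name> roughly every 500 ms and inspecting the node's status conditions until Ready flips to True. A minimal client-go sketch of the same check (a hypothetical standalone helper, not minikube's actual node_ready.go; assumes a kubeconfig at the default path):

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    // nodeIsReady reports whether the named node currently has the
    // Ready condition with status True.
    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(cfg), "multinode-409200")
        fmt.Println(ready, err)
    }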
	I0429 12:45:01.999717    3296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:45:01.999951    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:01.999951    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.999951    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.999951    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.007629    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:02.007629    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Audit-Id: de4530cf-d6fa-4642-9986-1130033d04d0
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.007629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.007629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.008620    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0429 12:45:02.014626    3296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:02.014626    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:02.014626    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.014626    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.014626    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.018623    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.019129    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.019129    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.019129    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Audit-Id: c932abd8-5efc-4791-ba87-272407a6105e
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.019129    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:02.019760    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:02.019760    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.019760    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.019760    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.022335    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:02.022335    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Audit-Id: 005cbf7b-dcb5-4c3c-a918-a06dc9d91ff3
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.022690    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.022690    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.022751    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.022751    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:02.517658    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:02.517720    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.517720    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.517720    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.521377    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.521377    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.521377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.521377    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Audit-Id: 0ba0b24e-578a-44e4-aacf-85bf2cbf2f35
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.521910    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.522201    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:02.522907    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:02.522979    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.522979    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.522979    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.526376    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.527262    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Audit-Id: 50407530-6988-4c95-821a-2707da3eebd0
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.527262    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.527262    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.528375    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.022363    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:03.022363    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.022363    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.022363    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.027925    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:03.027925    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.027925    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Audit-Id: ffc9b74e-abb8-4b52-858e-4dc1eebddc20
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.027925    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.028687    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:03.029435    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.029435    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.029435    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.029435    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.036065    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:45:03.036065    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Audit-Id: d91f1d54-d40b-493d-8b0e-f03031786d88
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.036065    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.036065    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.037065    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.529021    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:03.529021    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.529021    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.529021    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.534002    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.534002    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Audit-Id: b7814e9b-660c-42a5-b5fe-46f0ed38acec
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.534002    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.534233    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.534233    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.534697    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0429 12:45:03.535742    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.535804    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.535804    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.535804    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.548932    3296 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:45:03.548932    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Audit-Id: 9ebd281a-5935-4afb-8543-d9008eba601b
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.548932    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.548932    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.549963    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.549963    3296 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.549963    3296 pod_ready.go:81] duration metric: took 1.5353247s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
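The ~500 ms spacing between the coredns GETs above (02.014, 02.517, 03.022, 03.529) is a fixed-interval poll. The same pattern, sketched with apimachinery's wait helpers (illustrative only; minikube's pod_ready.go drives its own loop and logs every round trip):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms, up to timeout, until the pod
    // reports the Ready condition as True.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }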
	I0429 12:45:03.549963    3296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.549963    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 12:45:03.549963    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.549963    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.549963    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.569942    3296 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0429 12:45:03.569942    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.569942    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.569942    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Audit-Id: d61b0e30-7689-468f-b6cc-eb51e5e95a41
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.570399    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.570633    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"d181e36d-2901-4660-a441-6f6b5f3d6c5f","resourceVersion":"381","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.185.116:2379","kubernetes.io/config.hash":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.mirror":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.seen":"2024-04-29T12:44:32.885743739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0429 12:45:03.571180    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.571180    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.571180    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.571180    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.574016    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:03.574016    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.574016    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.574016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.574016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.574016    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.574412    3296 round_trippers.go:580]     Audit-Id: 7b4c4ef3-470b-4d30-a9ca-f03b2d6eeff1
	I0429 12:45:03.574412    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.574576    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.575177    3296 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.575256    3296 pod_ready.go:81] duration metric: took 25.2136ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.575256    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.575446    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 12:45:03.575446    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.575501    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.575501    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.578015    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:03.579015    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Audit-Id: 3eaf2df8-6bf2-489f-88bc-29f366d94d6f
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.579015    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.579015    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.579308    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"da427161-547d-4e8d-a545-8b243ce10f12","resourceVersion":"380","creationTimestamp":"2024-04-29T12:44:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.185.116:8443","kubernetes.io/config.hash":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.mirror":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.seen":"2024-04-29T12:44:24.392874586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0429 12:45:03.579984    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.579984    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.579984    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.580048    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.581541    3296 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 12:45:03.581541    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.581541    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Audit-Id: fd8ff95d-ae10-44fc-a86c-dcbc1e1e497c
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.582465    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.582465    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.582986    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.584577    3296 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.584618    3296 pod_ready.go:81] duration metric: took 9.3622ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.584618    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.584800    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 12:45:03.584800    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.584800    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.584800    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.592420    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:03.592420    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.592420    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.592420    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Audit-Id: cdfdc759-a918-4f6b-8211-ea8f62b39f8b
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.593419    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"382","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0429 12:45:03.593419    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.593419    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.593419    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.593419    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.596453    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:03.596453    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.596453    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.596453    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Audit-Id: bc0b5cb2-d269-41fb-9405-ceb55c938ed5
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.596453    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.596453    3296 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.596453    3296 pod_ready.go:81] duration metric: took 11.8345ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.596453    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.596453    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:45:03.596453    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.596453    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.596453    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.600415    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:03.600415    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.600415    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.600415    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Audit-Id: e08be7e5-51a1-4fb2-b260-4fd14e037e01
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.600415    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"375","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0429 12:45:03.600415    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.600415    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.600415    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.600415    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.604425    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.604425    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Audit-Id: 9d692b8f-80ae-41bb-a404-3c84c9d38af0
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.604425    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.604425    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.604425    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.604425    3296 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.604425    3296 pod_ready.go:81] duration metric: took 7.9727ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.604425    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.797053    3296 request.go:629] Waited for 192.3431ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:45:03.797345    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:45:03.797345    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.797345    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.797345    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.801930    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.802003    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Audit-Id: e08be03c-41b3-4327-b6ef-628d7a103e75
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.802003    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.802003    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.802068    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.802122    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"379","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0429 12:45:04.001132    3296 request.go:629] Waited for 197.8292ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:04.001300    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:04.001300    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.001300    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.001300    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.005356    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:04.005356    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.005356    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.005356    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Audit-Id: dbd289b4-c74d-48e3-9263-2cb4a6a20a89
	I0429 12:45:04.005938    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:45:04.006601    3296 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:04.006677    3296 pod_ready.go:81] duration metric: took 402.2485ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:04.006677    3296 pod_ready.go:38] duration metric: took 2.0069438s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
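The "Waited for ... due to client-side throttling, not priority and fairness" messages scattered through this phase and the next come from client-go's own token-bucket rate limiter (default QPS 5, burst 10), not from the API server's priority-and-fairness machinery. Where those delays matter, the limits can be raised on the rest.Config before building the clientset; a sketch under that assumption:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a relaxed client-side rate
    // limit; the defaults (QPS=5, Burst=10) are what produce the
    // "Waited for ..." throttling messages in the log.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }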
	I0429 12:45:04.006751    3296 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:45:04.020619    3296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:45:04.052599    3296 command_runner.go:130] > 2065
	I0429 12:45:04.053460    3296 api_server.go:72] duration metric: took 16.9303268s to wait for apiserver process to appear ...
	I0429 12:45:04.053544    3296 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:45:04.053619    3296 api_server.go:253] Checking apiserver healthz at https://172.26.185.116:8443/healthz ...
	I0429 12:45:04.064712    3296 api_server.go:279] https://172.26.185.116:8443/healthz returned 200:
	ok
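Once pgrep confirms a running kube-apiserver process (PID 2065 above), the harness probes /healthz and expects the literal body "ok". The same probe can be expressed through client-go's authenticated REST client (a sketch, not minikube's api_server.go):

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy fetches /healthz through the clientset's REST
    // client and reports whether the body is exactly "ok".
    func apiserverHealthy(cs *kubernetes.Clientset) (bool, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
        if err != nil {
            return false, err
        }
        return string(body) == "ok", nil
    }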
	I0429 12:45:04.065147    3296 round_trippers.go:463] GET https://172.26.185.116:8443/version
	I0429 12:45:04.065147    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.065147    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.065147    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.066701    3296 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 12:45:04.066701    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.066701    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.066701    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.067336    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.067336    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Content-Length: 263
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Audit-Id: 89817c99-cc06-411d-b40d-f89432a8d119
	I0429 12:45:04.067336    3296 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 12:45:04.067490    3296 api_server.go:141] control plane version: v1.30.0
	I0429 12:45:04.067599    3296 api_server.go:131] duration metric: took 14.0544ms to wait for apiserver health ...
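The /version payload above is a standard version.Info document; client-go's discovery client issues the same GET, which is the usual way to read the control-plane version programmatically (sketch, assuming an existing clientset):

    import "k8s.io/client-go/kubernetes"

    // controlPlaneVersion returns the apiserver's gitVersion, e.g. "v1.30.0".
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion() // GET /version under the hood
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }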
	I0429 12:45:04.067645    3296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:45:04.206940    3296 request.go:629] Waited for 139.2937ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.207142    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.207142    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.207142    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.207142    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.212478    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:04.212478    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.212478    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.212478    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Audit-Id: ec425d83-0f1a-431c-b584-2765f718b45d
	I0429 12:45:04.215302    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0429 12:45:04.218767    3296 system_pods.go:59] 8 kube-system pods found
	I0429 12:45:04.218767    3296 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "etcd-multinode-409200" [d181e36d-2901-4660-a441-6f6b5f3d6c5f] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-apiserver-multinode-409200" [da427161-547d-4e8d-a545-8b243ce10f12] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 12:45:04.218767    3296 system_pods.go:74] duration metric: took 151.1208ms to wait for pod list to return data ...
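The eight-pod inventory above comes from a single list of the kube-system namespace with each pod's name, UID, and phase reported. An equivalent sketch:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods mirrors the "N kube-system pods found" summary:
    // one List call, then name, UID, and phase per pod.
    func listSystemPods(cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }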
	I0429 12:45:04.218767    3296 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:45:04.407309    3296 request.go:629] Waited for 188.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:45:04.407617    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:45:04.407617    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.407617    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.407617    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.411864    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:04.411864    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Audit-Id: c1ebd2d8-a1e9-4583-a374-eec2950e9945
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.411864    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.411864    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Content-Length: 261
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.411864    3296 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c200474-8705-40aa-8512-ec20a74a9ff0","resourceVersion":"323","creationTimestamp":"2024-04-29T12:44:46Z"}}]}
	I0429 12:45:04.411864    3296 default_sa.go:45] found service account: "default"
	I0429 12:45:04.411864    3296 default_sa.go:55] duration metric: took 193.0951ms for default service account to be created ...
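
The recurring "Waited for … due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter (default 5 requests/second with a burst of 10), not server-side flow control; each poll above pays up to ~200ms of self-imposed delay. A minimal sketch of how a Go client can raise those limits — the kubeconfig path and the chosen values are illustrative, not minikube's:

    // Minimal sketch: raise client-go's client-side rate limits so back-to-back
    // list calls are not delayed. Path and values are illustrative.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\.kube\config`)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default 5: one request per 200ms once the burst is spent
        cfg.Burst = 100 // default 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same call the log shows: GET /api/v1/namespaces/kube-system/pods
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(len(pods.Items), "kube-system pods found")
    }
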
	I0429 12:45:04.411864    3296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:45:04.596858    3296 request.go:629] Waited for 184.2596ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.596972    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.596972    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.597047    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.597047    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.602297    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:04.602297    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Audit-Id: 9d8cdeef-7574-4196-af75-9235e7830d44
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.602297    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.602297    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.604234    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0429 12:45:04.607126    3296 system_pods.go:86] 8 kube-system pods found
	I0429 12:45:04.607339    3296 system_pods.go:89] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "etcd-multinode-409200" [d181e36d-2901-4660-a441-6f6b5f3d6c5f] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-apiserver-multinode-409200" [da427161-547d-4e8d-a545-8b243ce10f12] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 12:45:04.607528    3296 system_pods.go:89] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 12:45:04.607528    3296 system_pods.go:89] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 12:45:04.607528    3296 system_pods.go:126] duration metric: took 195.6622ms to wait for k8s-apps to be running ...
	I0429 12:45:04.607528    3296 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:45:04.620020    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:45:04.646095    3296 system_svc.go:56] duration metric: took 38.5674ms WaitForService to wait for kubelet
	I0429 12:45:04.646095    3296 kubeadm.go:576] duration metric: took 17.5229576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:45:04.646246    3296 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:45:04.798355    3296 request.go:629] Waited for 151.798ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes
	I0429 12:45:04.798670    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes
	I0429 12:45:04.798670    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.798670    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.798670    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.807122    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:04.807122    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Audit-Id: 27cd78e8-c916-4718-a2ef-21649bddc2f7
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.807122    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.807122    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.807122    3296 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0429 12:45:04.808046    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:45:04.808046    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:45:04.808046    3296 node_conditions.go:105] duration metric: took 161.5684ms to run NodePressure ...
	I0429 12:45:04.808566    3296 start.go:240] waiting for startup goroutines ...
	I0429 12:45:04.808566    3296 start.go:245] waiting for cluster config update ...
	I0429 12:45:04.808566    3296 start.go:254] writing updated cluster config ...
	I0429 12:45:04.812510    3296 out.go:177] 
	I0429 12:45:04.815299    3296 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:45:04.823733    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:45:04.824679    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:45:04.829645    3296 out.go:177] * Starting "multinode-409200-m02" worker node in "multinode-409200" cluster
	I0429 12:45:04.832482    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:45:04.832482    3296 cache.go:56] Caching tarball of preloaded images
	I0429 12:45:04.833541    3296 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:45:04.833752    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:45:04.833905    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:45:04.840362    3296 start.go:360] acquireMachinesLock for multinode-409200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:45:04.840766    3296 start.go:364] duration metric: took 208µs to acquireMachinesLock for "multinode-409200-m02"
	I0429 12:45:04.841073    3296 start.go:93] Provisioning new machine with config: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:45:04.841073    3296 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 12:45:04.844315    3296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:45:04.844315    3296 start.go:159] libmachine.API.Create for "multinode-409200" (driver="hyperv")
	I0429 12:45:04.844315    3296 client.go:168] LocalClient.Create starting
	I0429 12:45:04.845902    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:45:04.846673    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 12:45:06.804924    3296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 12:45:06.805314    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:06.805397    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 12:45:08.591204    3296 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 12:45:08.591826    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:08.592018    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:45:10.108966    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:45:10.109042    3296 main.go:141] libmachine: [stderr =====>] : 
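
Every Hyper-V operation in this log is a fresh powershell.exe invocation from the Go driver; the [executing ==>] / [stdout =====>] / [stderr =====>] triples are its trace. The SID S-1-5-32-578 probed first is the well-known identifier of BUILTIN\Hyper-V Administrators; when that check returns False, the driver falls back to the plain Administrator role (True above). A sketch of the invocation pattern — the helper name runPS is made up for illustration, not minikube's actual API:

    // Sketch of the PowerShell shell-out pattern behind the [executing ==>] lines.
    // runPS is an illustrative helper name.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runPS(script string) (string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        out, err := cmd.CombinedOutput() // the real driver captures stdout and stderr separately
        return string(out), err
    }

    func main() {
        out, err := runPS(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
        fmt.Printf("admin=%s err=%v\n", out, err) // "True" when the process is elevated
    }
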
	I0429 12:45:10.109101    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:45:13.848914    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:45:13.848914    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:13.851597    3296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:45:14.372678    3296 main.go:141] libmachine: Creating SSH key...
	I0429 12:45:15.046114    3296 main.go:141] libmachine: Creating VM...
	I0429 12:45:15.046114    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:45:18.024556    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:45:18.024648    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:18.024648    3296 main.go:141] libmachine: Using switch "Default Switch"
	I0429 12:45:18.024648    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:45:19.830813    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:45:19.830994    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:19.830994    3296 main.go:141] libmachine: Creating VHD
	I0429 12:45:19.831082    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 12:45:23.499514    3296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8687AA4C-C137-44FB-9D96-F96300160B58
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 12:45:23.499514    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:23.499514    3296 main.go:141] libmachine: Writing magic tar header
	I0429 12:45:23.499514    3296 main.go:141] libmachine: Writing SSH key tar header
	I0429 12:45:23.509648    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 12:45:26.692403    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:26.692627    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:26.692685    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd' -SizeBytes 20000MB
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [stderr =====>] : 
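
The VHD dance above is worth decoding. A fixed-size VHD is just raw disk bytes plus a 512-byte footer, so after New-VHD creates the 10MB fixed.vhd the driver can write straight into the disk's first sectors: the "magic tar header" is a marker string followed by a small tar stream carrying the SSH public key, which the guest's automount service finds at first boot, prompting it to format the disk and install the key. Only then is the file converted to a dynamic VHD and resized to the requested 20000MB. A sketch of the seeding step, assuming the docker-machine conventions (the marker text and archive entry name are assumptions not confirmed by this log):

    // Sketch: seed the raw fixed VHD with a marker plus a tar archive holding the
    // SSH public key. Marker string and entry name follow the docker-machine
    // convention and are assumptions here.
    package main

    import (
        "archive/tar"
        "bytes"
        "os"
    )

    func seedDisk(vhdPath string, pubKey []byte) error {
        var buf bytes.Buffer
        buf.WriteString("boot2docker, please format-me") // marker the guest scans for
        tw := tar.NewWriter(&buf)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        if err := tw.Close(); err != nil {
            return err
        }
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = f.WriteAt(buf.Bytes(), 0) // first bytes of a fixed VHD = start of the disk
        return err
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            panic(err)
        }
        if err := seedDisk("fixed.vhd", key); err != nil {
            panic(err)
        }
    }
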
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-409200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-409200-m02 -DynamicMemoryEnabled $false
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-409200-m02 -Count 2
	I0429 12:45:37.361678    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:37.361678    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:37.362213    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\boot2docker.iso'
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd'
	I0429 12:45:42.658479    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:42.659184    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:42.659184    3296 main.go:141] libmachine: Starting VM...
	I0429 12:45:42.659184    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m02
	I0429 12:45:45.749580    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:45.750057    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:45.750057    3296 main.go:141] libmachine: Waiting for host to start...
	I0429 12:45:45.750057    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:48.069884    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:48.069884    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:48.070148    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:45:50.607310    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:50.607310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:51.618434    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:53.814057    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:53.814268    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:53.814268    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:45:56.396318    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:56.396408    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:57.400139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:59.628138    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:59.629129    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:59.629209    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:02.151932    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:46:02.152954    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:03.162424    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:07.948312    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:46:07.949519    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:08.958127    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:13.895916    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:13.895916    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:13.896838    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [stderr =====>] : 
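
After Start-VM there is nothing to do but poll: the driver alternates a VM-state query with a read of the first IP on the first network adapter, sleeping about a second between rounds, until Hyper-V's integration services report an address (six rounds and roughly 28 seconds here, much of it PowerShell start-up cost). Continuing the earlier sketch, the IP half of that loop might look like:

    // Sketch of the wait-for-IP loop, reusing the illustrative runPS helper from
    // the earlier sketch. Attempt count and sleep interval are illustrative.
    // Needs "fmt", "strings", "time" in addition to the earlier imports.
    func waitForIP(vmName string) (string, error) {
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        for attempt := 0; attempt < 120; attempt++ {
            out, err := runPS(script)
            if err != nil {
                return "", err
            }
            if ip := strings.TrimSpace(out); ip != "" {
                return ip, nil // e.g. 172.26.183.208 above
            }
            time.Sleep(time.Second) // matches the ~1s gaps between rounds in the log
        }
        return "", fmt.Errorf("timed out waiting for an IP on %q", vmName)
    }
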
	I0429 12:46:16.080488    3296 machine.go:94] provisionDockerMachine start ...
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:20.885470    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:20.885470    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:20.892986    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:20.905116    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:20.905163    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:46:21.028078    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 12:46:21.028078    3296 buildroot.go:166] provisioning hostname "multinode-409200-m02"
	I0429 12:46:21.028078    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:23.222003    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:23.222672    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:23.222863    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:25.813982    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:25.814625    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:25.820174    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:25.820865    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:25.820865    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200-m02 && echo "multinode-409200-m02" | sudo tee /etc/hostname
	I0429 12:46:25.976952    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200-m02
	
	I0429 12:46:25.977060    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:30.696159    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:30.696159    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:30.703315    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:30.703999    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:30.703999    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:46:30.842446    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:46:30.842446    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 12:46:30.842446    3296 buildroot.go:174] setting up certificates
	I0429 12:46:30.842446    3296 provision.go:84] configureAuth start
	I0429 12:46:30.842446    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:32.965275    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:32.966273    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:32.966273    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:35.565457    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:35.565565    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:35.565565    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:37.707815    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:37.708682    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:37.708741    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:40.310992    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:40.311263    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:40.311263    3296 provision.go:143] copyHostCerts
	I0429 12:46:40.311498    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 12:46:40.312060    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 12:46:40.312148    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 12:46:40.312647    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 12:46:40.313776    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 12:46:40.313776    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 12:46:40.313776    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 12:46:40.314652    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 12:46:40.315444    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 12:46:40.316176    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 12:46:40.316176    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 12:46:40.316251    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 12:46:40.317490    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200-m02 san=[127.0.0.1 172.26.183.208 localhost minikube multinode-409200-m02]
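
configureAuth issues a per-machine server certificate signed by the local minikube CA, with the SANs listed above (loopback, the VM's new IP, and its host names) so Docker's TLS endpoint verifies however it is addressed. A compact sketch of SAN-bearing issuance with Go's standard library — the file names and the 26280h validity mirror the log, the rest is simplified:

    // Sketch: issue a server cert whose SANs cover the VM's IP and names, signed
    // by a CA loaded from ca.pem/ca-key.pem. Error handling is shortened to panic.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/tls"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        ca, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(ca.Certificate[0])
        if err != nil {
            panic(err)
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-409200-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The san=[...] list from the log:
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.183.208")},
            DNSNames:    []string{"localhost", "minikube", "multinode-409200-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, ca.PrivateKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
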
	I0429 12:46:40.489533    3296 provision.go:177] copyRemoteCerts
	I0429 12:46:40.500914    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:46:40.500914    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:42.648444    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:42.648500    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:42.648500    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:45.288552    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:45.288887    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:45.289051    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:46:45.400108    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8991564s)
	I0429 12:46:45.400108    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 12:46:45.400765    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:46:45.454027    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 12:46:45.454114    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 12:46:45.506432    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 12:46:45.506860    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:46:45.561859    3296 provision.go:87] duration metric: took 14.7192983s to configureAuth
	I0429 12:46:45.561945    3296 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:46:45.562643    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:46:45.562708    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:47.764121    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:47.764121    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:47.765116    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:50.332541    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:50.332943    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:50.339542    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:50.339686    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:50.339686    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:46:50.481784    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:46:50.481784    3296 buildroot.go:70] root file system type: tmpfs
	I0429 12:46:50.482020    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:46:50.482148    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:52.705401    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:52.705401    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:52.706452    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:55.300419    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:55.300611    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:55.307533    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:55.307664    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:55.307664    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.185.116"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:46:55.472533    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.185.116
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:46:55.472683    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:57.597428    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:57.597485    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:57.597485    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:00.149249    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:00.149249    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:00.156116    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:00.156454    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:00.156454    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:47:02.385337    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:47:02.385337    3296 machine.go:97] duration metric: took 46.3044876s to provisionDockerMachine
	I0429 12:47:02.385440    3296 client.go:171] duration metric: took 1m57.5392233s to LocalClient.Create
	I0429 12:47:02.385440    3296 start.go:167] duration metric: took 1m57.5402109s to libmachine.API.Create "multinode-409200"
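
Note how the rendered unit was applied: it is written to docker.service.new and only moved into place, followed by daemon-reload / enable / restart, when diff -u reports a difference (or, as here, when no unit exists yet, hence the "can't stat" message and the freshly created symlink). The guard keeps repeated provisioning runs from restarting Docker needlessly. The one-liner, lifted verbatim from the log, as a Go constant for reuse; in the real flow it is executed on the guest over SSH:

    // Write-if-changed guard for the rendered unit, exactly as run above.
    const applyDockerUnit = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
        `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
        `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
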
	I0429 12:47:02.385525    3296 start.go:293] postStartSetup for "multinode-409200-m02" (driver="hyperv")
	I0429 12:47:02.385566    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:47:02.399065    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:47:02.399065    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:04.523660    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:04.523741    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:04.523741    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:07.089102    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:07.089102    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:07.089875    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:07.199491    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8003032s)
	I0429 12:47:07.213686    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:47:07.222251    3296 command_runner.go:130] > NAME=Buildroot
	I0429 12:47:07.222251    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 12:47:07.222251    3296 command_runner.go:130] > ID=buildroot
	I0429 12:47:07.222251    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 12:47:07.222251    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 12:47:07.222251    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:47:07.222251    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 12:47:07.222845    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 12:47:07.223880    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 12:47:07.223966    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 12:47:07.236998    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:47:07.258879    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 12:47:07.309777    3296 start.go:296] duration metric: took 4.9241487s for postStartSetup
	I0429 12:47:07.312753    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:09.488725    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:09.490183    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:09.490183    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:12.098690    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:12.098690    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:12.098925    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:47:12.101723    3296 start.go:128] duration metric: took 2m7.2596607s to createHost
	I0429 12:47:12.101862    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:14.247044    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:14.247280    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:14.247280    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:16.891372    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:16.891372    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:16.899005    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:16.899165    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:16.899165    3296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:47:17.035974    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394837.032590502
	
	I0429 12:47:17.036116    3296 fix.go:216] guest clock: 1714394837.032590502
	I0429 12:47:17.036116    3296 fix.go:229] Guest: 2024-04-29 12:47:17.032590502 +0000 UTC Remote: 2024-04-29 12:47:12.1017238 +0000 UTC m=+348.223296901 (delta=4.930866702s)
	I0429 12:47:17.036116    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:19.226390    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:19.226390    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:19.226772    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:21.808839    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:21.808839    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:21.815751    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:21.815751    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:21.815751    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714394837
	I0429 12:47:21.956676    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 12:47:17 UTC 2024
	
	I0429 12:47:21.956676    3296 fix.go:236] clock set: Mon Apr 29 12:47:17 UTC 2024
	 (err=<nil>)
	I0429 12:47:21.956676    3296 start.go:83] releasing machines lock for "multinode-409200-m02", held for 2m17.1145367s
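
The fix.go exchange above is a guest-clock sanity check: read `date +%s.%N` over SSH, compare it against the host-side reference, and reset the guest with `sudo date -s @<epoch>` when the drift is too large; here the guest ran about 4.9s ahead (delta=4.930866702s). A sketch of that check — the sshRun helper and the 2s threshold are illustrative, not minikube's:

    // Sketch of the guest-clock fix-up. sshRun is a hypothetical helper returning
    // a remote command's stdout. Needs "fmt", "strconv", "strings", "time".
    func syncGuestClock(sshRun func(cmd string) (string, error)) error {
        out, err := sshRun("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(int64(secs), 0))
        if drift > 2*time.Second || drift < -2*time.Second {
            _, err = sshRun(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }
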
	I0429 12:47:21.956676    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:24.097712    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:24.097787    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:24.097844    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:26.668092    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:26.668471    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:26.671159    3296 out.go:177] * Found network options:
	I0429 12:47:26.674110    3296 out.go:177]   - NO_PROXY=172.26.185.116
	W0429 12:47:26.677026    3296 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:47:26.679171    3296 out.go:177]   - NO_PROXY=172.26.185.116
	W0429 12:47:26.681764    3296 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:47:26.683273    3296 proxy.go:119] fail to check proxy env: Error ip not in block
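
The three "fail to check proxy env" warnings appear to come from a guard that asks whether each node IP is already covered by a NO_PROXY entry; a bare address like 172.26.185.116 is not a CIDR block, so the block test cannot match the new node's IP. A rough Go sketch of that kind of check (ipInNoProxy is a hypothetical helper, not minikube's proxy.go):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipInNoProxy reports whether ip is covered by a NO_PROXY entry,
    // either as an exact address or as a CIDR block containing it.
    func ipInNoProxy(ip, noProxy string) bool {
        target := net.ParseIP(ip)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == ip {
                return true
            }
            if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
                return true
            }
        }
        return false
    }

    func main() {
        // NO_PROXY holds only the control plane's bare IP, so the worker's
        // address is neither an exact match nor inside a parsable block.
        fmt.Println(ipInNoProxy("172.26.183.208", "172.26.185.116")) // false
    }
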
	I0429 12:47:26.686471    3296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:47:26.686593    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:26.698860    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:47:26.699858    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:28.889929    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:28.889929    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:28.890453    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:31.610321    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:31.610321    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:31.611316    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:31.638337    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:31.638337    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:31.638337    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:31.772452    3296 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 12:47:31.772719    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 12:47:31.772719    3296 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.073819s)
	I0429 12:47:31.772719    3296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0862083s)
	W0429 12:47:31.772719    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:47:31.788178    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:47:31.823416    3296 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 12:47:31.823555    3296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
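
The find/mv pass above renames every bridge or podman CNI config with a .mk_disabled suffix so only minikube's chosen CNI stays active; here it disabled 87-podman-bridge.conflist. A sketch of replaying that command locally (needs root and GNU find; minikube runs it on the guest via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Shell-escaped form of the command from the log. GNU find substitutes
        // {} inside the -exec argument, turning each match into a rename.
        cmd := `sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name "*bridge*" -or -name "*podman*" \) -and -not -name "*.mk_disabled" \) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Println(string(out), err)
    }
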
	I0429 12:47:31.823638    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:47:31.823807    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:31.864032    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 12:47:31.877665    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:47:31.915242    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:47:31.938595    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:47:31.951828    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:47:31.988717    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:47:32.025441    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:47:32.061177    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:47:32.097777    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:47:32.133151    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:47:32.172091    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:47:32.207948    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:47:32.240923    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:47:32.262425    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 12:47:32.275413    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:47:32.307262    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:32.522716    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
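
The run of sed edits above rewrites /etc/containerd/config.toml so the runc shim drives cgroups itself ("cgroupfs") rather than delegating to systemd, pins the pause image to registry.k8s.io/pause:3.9, and points conf_dir at /etc/cni/net.d. A small Go sketch of the SystemdCgroup edit (the sample TOML fragment is illustrative, not the guest's actual file):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
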
	I0429 12:47:32.557110    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:47:32.569222    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:47:32.595469    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 12:47:32.595469    3296 command_runner.go:130] > [Unit]
	I0429 12:47:32.595469    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 12:47:32.595469    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 12:47:32.595469    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 12:47:32.595469    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 12:47:32.595469    3296 command_runner.go:130] > StartLimitBurst=3
	I0429 12:47:32.595469    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 12:47:32.595469    3296 command_runner.go:130] > [Service]
	I0429 12:47:32.595469    3296 command_runner.go:130] > Type=notify
	I0429 12:47:32.595469    3296 command_runner.go:130] > Restart=on-failure
	I0429 12:47:32.595469    3296 command_runner.go:130] > Environment=NO_PROXY=172.26.185.116
	I0429 12:47:32.595469    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 12:47:32.595469    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 12:47:32.595469    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 12:47:32.595469    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 12:47:32.595469    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 12:47:32.595469    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 12:47:32.595469    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecStart=
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 12:47:32.595469    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitNPROC=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitCORE=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 12:47:32.595469    3296 command_runner.go:130] > TasksMax=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > TimeoutStartSec=0
	I0429 12:47:32.595469    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 12:47:32.595469    3296 command_runner.go:130] > Delegate=yes
	I0429 12:47:32.595469    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 12:47:32.595469    3296 command_runner.go:130] > KillMode=process
	I0429 12:47:32.596007    3296 command_runner.go:130] > [Install]
	I0429 12:47:32.596007    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0429 12:47:32.609427    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:32.647237    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:47:32.693747    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:32.746682    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:47:32.787047    3296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:47:32.851495    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:47:32.878304    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:32.915396    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 12:47:32.926454    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:47:32.932598    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 12:47:32.945905    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:47:32.962828    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:47:33.010724    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:47:33.221170    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:47:33.427612    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:47:33.427711    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:47:33.476840    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:33.689420    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:47:36.262572    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5730625s)
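
docker.go then switches dockerd to the same "cgroupfs" driver by writing a 130-byte /etc/docker/daemon.json and restarting the service. The payload itself is not logged; the following is an assumed, plausible shape for it (only the exec-opts line is implied by the "configuring docker to use cgroupfs" message above), not the file's verbatim content:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumption: a typical minikube-style daemon.json; exec-opts is the
        // part that makes dockerd's cgroup driver match the kubelet's.
        daemon := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b))
    }
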
	I0429 12:47:36.276209    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:47:36.315570    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:47:36.358605    3296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:47:36.588183    3296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:47:36.818736    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:37.036451    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:47:37.082193    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:47:37.120569    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:37.346985    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:47:37.463969    3296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:47:37.477867    3296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:47:37.487999    3296 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 12:47:37.488182    3296 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 12:47:37.488229    3296 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0429 12:47:37.488229    3296 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 12:47:37.488229    3296 command_runner.go:130] > Access: 2024-04-29 12:47:37.365902792 +0000
	I0429 12:47:37.488229    3296 command_runner.go:130] > Modify: 2024-04-29 12:47:37.365902792 +0000
	I0429 12:47:37.488229    3296 command_runner.go:130] > Change: 2024-04-29 12:47:37.370902716 +0000
	I0429 12:47:37.488280    3296 command_runner.go:130] >  Birth: -
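
The "Will wait 60s for socket path" step polls for /var/run/cri-dockerd.sock until the freshly restarted cri-docker.service creates it; the stat output above shows it appeared almost immediately. A minimal sketch of such a wait loop (the 500ms poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats path until it exists or the deadline passes,
    // mirroring the 60s socket wait in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
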
	I0429 12:47:37.488280    3296 start.go:562] Will wait 60s for crictl version
	I0429 12:47:37.501342    3296 ssh_runner.go:195] Run: which crictl
	I0429 12:47:37.507515    3296 command_runner.go:130] > /usr/bin/crictl
	I0429 12:47:37.521938    3296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:47:37.586264    3296 command_runner.go:130] > Version:  0.1.0
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeName:  docker
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 12:47:37.586344    3296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:47:37.596233    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:47:37.630211    3296 command_runner.go:130] > 26.0.2
	I0429 12:47:37.640278    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:47:37.670207    3296 command_runner.go:130] > 26.0.2
	I0429 12:47:37.673376    3296 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:47:37.677101    3296 out.go:177]   - env NO_PROXY=172.26.185.116
	I0429 12:47:37.680928    3296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 12:47:37.687846    3296 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 12:47:37.687846    3296 ip.go:210] interface addr: 172.26.176.1/20
	I0429 12:47:37.704000    3296 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 12:47:37.711121    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
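
The /etc/hosts one-liner above is an upsert: grep -v strips any stale line ending in "<tab>host.minikube.internal", the echo appends the current gateway IP, and the result is copied back over /etc/hosts. The same logic in Go, as a sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any existing line for name, then appends "ip<tab>name",
    // matching the grep -v / echo / cp pipeline from the log.
    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "172.26.176.1", "host.minikube.internal"))
    }
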
	I0429 12:47:37.732971    3296 mustload.go:65] Loading cluster: multinode-409200
	I0429 12:47:37.733916    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:47:37.734674    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:47:39.857268    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:39.857605    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:39.857678    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:47:39.858356    3296 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.183.208
	I0429 12:47:39.858356    3296 certs.go:194] generating shared ca certs ...
	I0429 12:47:39.858356    3296 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:39.858892    3296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 12:47:39.859101    3296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 12:47:39.859625    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:47:39.859897    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:47:39.860079    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:47:39.860187    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:47:39.860949    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 12:47:39.861313    3296 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 12:47:39.861522    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 12:47:39.861732    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 12:47:39.862267    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 12:47:39.862709    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 12:47:39.863370    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 12:47:39.864171    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:47:39.918166    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:47:39.972223    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:47:40.026549    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:47:40.080173    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 12:47:40.130915    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:47:40.185551    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 12:47:40.257051    3296 ssh_runner.go:195] Run: openssl version
	I0429 12:47:40.266345    3296 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 12:47:40.281476    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 12:47:40.326424    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.333587    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.333690    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.347944    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.358483    3296 command_runner.go:130] > 3ec20f2e
	I0429 12:47:40.372154    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:47:40.407197    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:47:40.445101    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.454036    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.454036    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.469854    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.480159    3296 command_runner.go:130] > b5213941
	I0429 12:47:40.494559    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:47:40.530557    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 12:47:40.568929    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.576708    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.576777    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.591154    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.603648    3296 command_runner.go:130] > 51391683
	I0429 12:47:40.618109    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
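
Each "openssl x509 -hash" run above computes the certificate's subject hash (3ec20f2e, b5213941, 51391683), which then names a /etc/ssl/certs/<hash>.0 symlink so OpenSSL's lookup-by-hash can find the CA. A sketch of one iteration (requires the openssl binary and the cert on disk):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
    }
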
	I0429 12:47:40.655162    3296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:47:40.663232    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:40.663949    3296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:40.664121    3296 kubeadm.go:928] updating node {m02 172.26.183.208 8443 v1.30.0 docker false true} ...
	I0429 12:47:40.664340    3296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.183.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:47:40.679470    3296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:40.699614    3296 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0429 12:47:40.699653    3296 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:47:40.713121    3296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:40.732579    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:47:40.732665    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 12:47:40.732665    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 12:47:40.732818    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:47:40.732874    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:47:40.753036    3296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:47:40.754002    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:47:40.754181    3296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:47:40.760880    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:47:40.760982    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:47:40.760982    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:47:40.812628    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:47:40.812855    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:47:40.812775    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:47:40.812925    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:47:40.826950    3296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:47:40.886206    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:47:40.886795    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:47:40.886795    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
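
Because this is the worker's first start, each stat existence check above fails and the cached kubeadm, kubectl, and kubelet binaries are scp'd over (about 190 MB total). A simplified local sketch of that decision (the real check runs stat -c "%s %y" over SSH and also compares size and mtime against the cache):

    package main

    import (
        "fmt"
        "os"
    )

    // needsTransfer reports whether the remote binary is absent; in minikube
    // the stat happens over SSH and a mismatch also forces a re-copy.
    func needsTransfer(remotePath string) bool {
        _, err := os.Stat(remotePath)
        return err != nil
    }

    func main() {
        for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
            p := "/var/lib/minikube/binaries/v1.30.0/" + bin
            fmt.Printf("%s needs transfer: %v\n", p, needsTransfer(p))
        }
    }
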
	I0429 12:47:42.201325    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0429 12:47:42.222840    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0429 12:47:42.257904    3296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:47:42.307563    3296 ssh_runner.go:195] Run: grep 172.26.185.116	control-plane.minikube.internal$ /etc/hosts
	I0429 12:47:42.317117    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.185.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:47:42.358165    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:42.591954    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:42.625305    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:47:42.626025    3296 start.go:316] joinCluster: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:47:42.626218    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:47:42.626311    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:47:44.890078    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:44.890621    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:44.890685    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:47.528269    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:47:47.528349    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:47.528435    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:47:47.732559    3296 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:47:47.732559    3296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.106248s)
	I0429 12:47:47.732559    3296 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:47:47.732559    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-409200-m02"
	I0429 12:47:47.979768    3296 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:47:49.375923    3296 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 12:47:49.375982    3296 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0429 12:47:49.375982    3296 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0429 12:47:49.375982    3296 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001862975s
	I0429 12:47:49.376160    3296 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0429 12:47:49.376160    3296 command_runner.go:130] > This node has joined the cluster:
	I0429 12:47:49.376327    3296 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0429 12:47:49.376386    3296 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0429 12:47:49.376386    3296 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0429 12:47:49.376447    3296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-409200-m02": (1.6438751s)
	I0429 12:47:49.376563    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:47:49.610407    3296 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0429 12:47:49.839420    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-409200-m02 minikube.k8s.io/updated_at=2024_04_29T12_47_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=multinode-409200 minikube.k8s.io/primary=false
	I0429 12:47:49.973372    3296 command_runner.go:130] > node/multinode-409200-m02 labeled
	I0429 12:47:49.973997    3296 start.go:318] duration metric: took 7.3479147s to joinCluster
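
The join itself is two SSH commands: "kubeadm token create --print-join-command --ttl=0" on the control plane, then the printed command on the worker with minikube's extra flags appended. A sketch of that flow (assumes kubeadm is on the PATH of a control-plane host; the node name matches the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (control plane): mint a fresh join command.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        // Step 2 (worker): run it with the flags minikube appends in the log.
        join := strings.TrimSpace(string(out)) +
            " --ignore-preflight-errors=all" +
            " --cri-socket unix:///var/run/cri-dockerd.sock" +
            " --node-name=multinode-409200-m02"
        fmt.Println("worker would run:", join)
    }
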
	I0429 12:47:49.974087    3296 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:47:49.980586    3296 out.go:177] * Verifying Kubernetes components...
	I0429 12:47:49.974888    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:47:49.995587    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:50.223514    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:50.254494    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:47:50.257555    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:47:50.258504    3296 node_ready.go:35] waiting up to 6m0s for node "multinode-409200-m02" to be "Ready" ...
	I0429 12:47:50.258504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:50.258504    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:50.258504    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:50.258504    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:50.277046    3296 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 12:47:50.277046    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:50.277046    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:50 GMT
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Audit-Id: 1aa7904c-6305-4ec6-bae7-4b076ad2e827
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:50.277046    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:50.277551    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:50.773175    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:50.773175    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:50.773175    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:50.773175    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:50.777032    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:50.777685    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:50 GMT
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Audit-Id: e4937045-09b1-472b-9826-805039567d77
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:50.777685    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:50.777685    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:50.777825    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:51.273583    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:51.273583    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:51.273583    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:51.273583    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:51.282795    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:47:51.283663    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:51.283663    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:51.283663    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:51 GMT
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Audit-Id: 03d6bb61-7e8f-4b5e-8dc6-ad4f82291662
	I0429 12:47:51.283748    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:51.283845    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:51.760217    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:51.760217    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:51.760217    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:51.760217    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:51.767312    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:47:51.767938    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:51 GMT
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Audit-Id: d0e8cb5b-5939-44df-ad18-19d37e8cba55
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:51.767987    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:51.767987    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:51.767987    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:51.768019    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:51.768019    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:52.261334    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:52.261334    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:52.261334    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:52.261334    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:52.267062    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:52.267062    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:52.267062    3296 round_trippers.go:580]     Audit-Id: 25f6fc60-cd0b-4848-9be9-c476f74565e8
	I0429 12:47:52.267062    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:52.267451    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:52.267451    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:52 GMT
	I0429 12:47:52.267451    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:52.267987    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
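
Behind these round_trippers entries, node_ready.go simply re-GETs the node object about twice a second for up to 6m0s until its Ready condition turns True; the responses above still carry resourceVersion 589 with Ready unset, hence the "Ready":"False" note. An equivalent wait written against client-go, as a sketch rather than minikube's code (kubeconfig path assumed to be the default):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // matching the ~500ms cadence visible in the timestamps above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q never became Ready", name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "multinode-409200-m02", 6*time.Minute))
    }
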
	I0429 12:47:52.762151    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:52.762704    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:52.762704    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:52.762704    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:52.766898    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:52.766898    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Audit-Id: 14d1a1d4-c569-4523-ae8f-85dcf4ae0441
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:52.767375    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:52.767375    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:52 GMT
	I0429 12:47:52.767548    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:53.265576    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:53.265576    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:53.265576    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:53.265576    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:53.270147    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:53.270556    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:53.270556    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:53.270556    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:53.270556    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:53.270556    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:53 GMT
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Audit-Id: 1c2e0aaf-61ef-4ae0-8c6b-2a6ebe793d07
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:53.270797    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:53.772050    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:53.772115    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:53.772115    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:53.772115    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:53.776298    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:53.776298    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:53.776407    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:53.776407    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:53 GMT
	I0429 12:47:53.776556    3296 round_trippers.go:580]     Audit-Id: 650c8085-cdb7-4f97-be72-505b96355229
	I0429 12:47:53.776664    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:54.272406    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:54.272406    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:54.272406    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:54.272406    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:54.277043    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:54.277043    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:54.277043    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:54.277043    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:54.277043    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:54 GMT
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Audit-Id: efeda477-6b13-4a7f-8e6e-8ca984d592e0
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:54.277721    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:54.277805    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:54.277899    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:54.761952    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:54.761952    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:54.761952    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:54.761952    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:54.770579    3296 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:47:54.770847    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Audit-Id: b2ebdb7e-6e55-4343-9e4f-d6ca42f04044
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:54.770847    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:54.770847    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:54.770934    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:54.770934    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:54 GMT
	I0429 12:47:54.771140    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:55.267085    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:55.267085    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:55.267085    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:55.267085    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:55.274773    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:47:55.274773    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:55 GMT
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Audit-Id: 8d01f411-0e28-4e00-98c7-c840216695b8
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:55.274773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:55.274773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:55.275758    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:55.766800    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:55.766860    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:55.766860    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:55.766860    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:55.772166    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:55.772166    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:55.772166    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:55.772166    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:55 GMT
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Audit-Id: 834aa731-1ee4-4c74-ade3-554a90de45da
	I0429 12:47:55.772166    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.259313    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:56.259397    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:56.259397    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:56.259397    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:56.262998    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:56.262998    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:56 GMT
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Audit-Id: bd25cc06-f1d2-4ce8-b018-c6ceca63b38b
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:56.262998    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:56.262998    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:56.262998    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.766384    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:56.766384    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:56.766384    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:56.766384    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:56.771982    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:56.771982    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:56.772048    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:56.772048    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:56 GMT
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Audit-Id: 6d0a1606-3048-4a88-af62-130b8e76e2dc
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:56.772252    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.772493    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:57.259348    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:57.259348    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:57.259348    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:57.259348    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:57.264741    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:57.264741    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:57 GMT
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Audit-Id: bfaae552-1eaa-47ae-94ea-9f0308003f82
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:57.264741    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:57.264741    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:57.264741    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:57.764591    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:57.764797    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:57.764797    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:57.764867    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:57.768479    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:57.768918    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Audit-Id: 9bb134cf-d970-4ec1-9255-e635219f5243
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:57.768918    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:57.768993    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:57.769024    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:57.769024    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:57 GMT
	I0429 12:47:57.769164    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:58.273211    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:58.273211    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:58.273211    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:58.273211    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:58.277807    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:58.277807    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:58.277807    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:58 GMT
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Audit-Id: d8b2db52-e22e-4852-953f-768d65a1f21e
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:58.277807    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:58.277807    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:58.764134    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:58.764209    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:58.764230    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:58.764267    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:58.768270    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:58.768270    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:58.768270    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:58.768270    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:58 GMT
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Audit-Id: cbbabbc5-59cb-4870-bd7e-70382a66be88
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:58.768543    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:59.272633    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:59.272633    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:59.272633    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:59.272633    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:59.277493    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:59.277493    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:59 GMT
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Audit-Id: bca1926c-6392-44d2-a4cc-d4cbbd6f6a9a
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:59.277493    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:59.277493    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:59.277691    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:59.277691    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:59.763504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:59.763504    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:59.763504    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:59.763504    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:59.769826    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:47:59.769826    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:59.769826    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:59 GMT
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Audit-Id: 955a0863-ccf6-47ac-a93f-f1d961e0cda3
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:59.769826    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:59.769826    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:00.262517    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:00.262587    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:00.262587    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:00.262587    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:00.270447    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:00.270447    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Audit-Id: db443f2d-d881-4293-b30b-75cad07002c2
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:00.270447    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:00.270447    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:00 GMT
	I0429 12:48:00.270447    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:00.759504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:00.759735    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:00.759735    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:00.759735    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:00.765102    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:48:00.765102    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:00.765398    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:00.765398    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:00 GMT
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Audit-Id: 1805a419-555f-4cad-8ada-e15690b29346
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:00.765788    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.263481    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:01.263481    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:01.263481    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:01.263481    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:01.267090    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:01.267090    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:01.267090    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:01.267090    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:01.267848    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:01 GMT
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Audit-Id: c0826b49-0f93-45aa-be31-8840b0185ff5
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:01.268076    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.765912    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:01.765912    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:01.765912    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:01.765912    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:01.768972    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:01.768972    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:01.768972    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:01.768972    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:01 GMT
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Audit-Id: f3db533c-fdd9-4604-baed-603c4f98caa5
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:01.769782    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.770459    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:02.259098    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:02.259098    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:02.259098    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:02.259098    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:02.264247    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:48:02.264310    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:02 GMT
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Audit-Id: 5aa046f0-9575-4a01-bb1c-bf41a8778174
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:02.264310    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:02.264310    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:02.264464    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:02.766902    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:02.766902    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:02.766902    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:02.766902    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:02.770362    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:02.771226    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Audit-Id: 6cdbcd73-477a-4bd7-8865-15a410e6d91e
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:02.771226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:02.771226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:02.771307    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:02 GMT
	I0429 12:48:02.771606    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:03.271334    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:03.271334    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:03.271334    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:03.271334    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:03.275931    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:03.275931    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:03.275931    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:03 GMT
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Audit-Id: 2db591d0-97e2-4d0b-8d4f-60a045b4b473
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:03.275931    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:03.275931    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:03.760823    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:03.760887    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:03.760952    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:03.760952    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:03.764565    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:03.764719    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Audit-Id: 4bd352bf-9473-479a-82ac-386bf52f710b
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:03.764719    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:03.764719    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:03 GMT
	I0429 12:48:03.764892    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:04.271987    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:04.271987    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:04.271987    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:04.271987    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:04.275972    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:04.276630    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:04.276630    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:04.276630    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:04 GMT
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Audit-Id: ba8439d6-b081-4ef6-98d0-d3df255318f8
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:04.276914    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:04.277199    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:04.765191    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:04.765263    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:04.765263    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:04.765294    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:04.768620    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:04.768620    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Audit-Id: cb134310-a03b-4069-a517-f799ccab4010
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:04.769281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:04.769281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:04 GMT
	I0429 12:48:04.769572    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:05.259044    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:05.259044    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:05.259044    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:05.259044    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:05.261814    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:05.261814    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:05.261814    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:05 GMT
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Audit-Id: 04148dfe-eeec-48a6-9915-4c5b416cd3d4
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:05.262710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:05.262838    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:05.262856    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:05.761517    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:05.761608    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:05.761608    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:05.761608    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:05.765654    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:05.765930    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Audit-Id: 8cbdf49f-3b0c-4a29-ab98-997512edc7f9
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:05.765930    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:05.765930    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:05 GMT
	I0429 12:48:05.766810    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.265026    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:06.265026    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:06.265026    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:06.265153    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:06.269016    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:06.269016    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:06.269016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:06.269016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:06 GMT
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Audit-Id: 282f730b-2e0e-4652-8968-b1ba746e4a29
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:06.269586    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.759190    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:06.759284    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:06.759284    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:06.759284    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:06.765962    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:48:06.765962    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Audit-Id: f392bda5-7aad-41cf-85f9-7274c03e30e1
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:06.765962    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:06.765962    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:06 GMT
	I0429 12:48:06.765962    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.766988    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:07.265922    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:07.265922    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:07.265922    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:07.265922    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:07.272554    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:48:07.272554    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:07.272554    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:07.272554    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:07 GMT
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Audit-Id: 44a4d7ef-d03a-425c-a66f-060a35d40b90
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:07.273368    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:07.766378    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:07.766378    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:07.766459    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:07.766459    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:07.770741    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:07.770741    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:07.770741    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:07.771456    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:07.771456    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:07 GMT
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Audit-Id: 939a3f92-4ee9-4114-8d4b-26ebd919b43f
	I0429 12:48:07.771877    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.271681    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:08.271681    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:08.271751    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.271751    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:08.275134    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:08.275134    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:08.275134    3296 round_trippers.go:580]     Audit-Id: 57b0b765-1aa9-4fb0-a7b7-39a603e784f8
	I0429 12:48:08.275134    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:08.275581    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:08.275581    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:08.275581    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:08.275581    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:08 GMT
	I0429 12:48:08.276135    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.761756    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:08.761756    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:08.761756    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.761840    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:08.766237    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:08.766553    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:08.766553    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:08.766553    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:08 GMT
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Audit-Id: 1238846c-05bd-4bab-be3c-2d0d495523e1
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:08.767232    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.767779    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:09.273783    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:09.273783    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:09.273853    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:09.273853    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:09.277281    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:09.277281    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Audit-Id: f9e0f9ee-a658-41fd-b611-988a7b5e6905
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:09.277281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:09.277281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:09 GMT
	I0429 12:48:09.277281    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:09.773435    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:09.773435    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:09.773435    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:09.773435    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:09.778077    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:09.778077    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:09.778077    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:09 GMT
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Audit-Id: 13dfc8fa-d709-460a-83d9-be31b8d38a40
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:09.778077    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:09.778077    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.273817    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:10.273873    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:10.273873    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:10.273873    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:10.278269    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:10.278269    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:10.278269    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:10 GMT
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Audit-Id: be6af11e-f775-49f7-976d-bccb19209c49
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:10.278867    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:10.278867    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:10.278994    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.770369    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:10.770452    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:10.770452    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:10.770452    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:10.774602    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:10.774602    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:10.774602    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:10.774602    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:10.774602    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:10 GMT
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Audit-Id: d720d994-848d-4b2c-aef0-bf666190289f
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:10.775097    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.775887    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:11.272722    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:11.272849    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:11.272849    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:11.272849    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:11.281710    3296 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:48:11.281710    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:11.281710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:11 GMT
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Audit-Id: b03a569f-9b3c-4867-82ca-4ad703c59ff4
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:11.281710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:11.282825    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:11.771706    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:11.771747    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:11.771747    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:11.771747    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:11.776773    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:11.776773    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:11.776773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:11.776773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:11 GMT
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Audit-Id: dff77a36-365e-4591-8f7d-06a6258a1e54
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:11.776773    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:12.272009    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:12.272085    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.272085    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.272085    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.276159    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:12.276159    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.276159    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.276159    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Audit-Id: e0373aa0-d69a-4307-a088-fc917be35e5d
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.277387    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:12.771145    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:12.771145    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.771145    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.771145    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.775119    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.775525    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.775525    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.775525    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Audit-Id: 50b37234-dedd-41e6-9046-584be76d0e79
	I0429 12:48:12.775715    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"625","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0429 12:48:12.776327    3296 node_ready.go:49] node "multinode-409200-m02" has status "Ready":"True"
	I0429 12:48:12.776327    3296 node_ready.go:38] duration metric: took 22.5176475s for node "multinode-409200-m02" to be "Ready" ...
	I0429 12:48:12.776327    3296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:48:12.776469    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:48:12.776469    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.776469    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.776469    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.784492    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:12.784492    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Audit-Id: 94d646b8-05cf-4d03-9b1b-3ef27e586afb
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.784492    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.784492    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.785404    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"625"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0429 12:48:12.790223    3296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.790579    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:48:12.790648    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.790648    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.790648    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.793956    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.793956    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.793956    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.793956    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Audit-Id: 388d1630-fe39-4ef2-8fb2-aad991435d61
	I0429 12:48:12.794511    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0429 12:48:12.795121    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.795121    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.795187    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.795187    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.798037    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.798162    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Audit-Id: cd7aa80d-9409-4761-937d-9eeb24a8d1ee
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.798162    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.798162    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.798351    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.798576    3296 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.798576    3296 pod_ready.go:81] duration metric: took 8.2932ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.798576    3296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.798576    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 12:48:12.798576    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.798576    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.798576    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.801224    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.802225    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Audit-Id: c10e3746-87e3-4f93-991b-058201592f85
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.802225    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.802308    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.802308    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.802566    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"d181e36d-2901-4660-a441-6f6b5f3d6c5f","resourceVersion":"381","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.185.116:2379","kubernetes.io/config.hash":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.mirror":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.seen":"2024-04-29T12:44:32.885743739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0429 12:48:12.803193    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.803254    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.803254    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.803254    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.806226    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.806226    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Audit-Id: a70fb09e-0dd5-4a3b-8869-47ac06f9e5bd
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.806226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.806226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.806226    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.806226    3296 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.807257    3296 pod_ready.go:81] duration metric: took 8.6815ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.807257    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.807257    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 12:48:12.807257    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.807257    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.807257    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.821225    3296 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:48:12.821952    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Audit-Id: 4eb177f8-3f79-41c6-8259-f2bfc89fb2c9
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.821952    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.821952    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.822386    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"da427161-547d-4e8d-a545-8b243ce10f12","resourceVersion":"380","creationTimestamp":"2024-04-29T12:44:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.185.116:8443","kubernetes.io/config.hash":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.mirror":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.seen":"2024-04-29T12:44:24.392874586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0429 12:48:12.822632    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.822632    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.822632    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.822632    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.825968    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.826380    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.826380    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.826380    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Audit-Id: e83ce058-61e8-48a6-afb7-50c47b79607d
	I0429 12:48:12.826524    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.826700    3296 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.827026    3296 pod_ready.go:81] duration metric: took 19.4424ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.827026    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.827147    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 12:48:12.827147    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.827147    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.827147    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.831978    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:12.831978    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Audit-Id: 083e5fff-f7d4-4f9e-be22-edaff55517dc
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.832086    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.832086    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.832503    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"382","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0429 12:48:12.833774    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.833774    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.833774    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.833774    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.836798    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.836798    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.836798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.836798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Audit-Id: e04f6a75-1171-438a-96f3-dddbe508dc2a
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.836798    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.836798    3296 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.836798    3296 pod_ready.go:81] duration metric: took 9.7721ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.836798    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.974646    3296 request.go:629] Waited for 136.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:48:12.974716    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:48:12.974716    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.974716    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.974793    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.979279    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.979279    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.979279    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.979279    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Audit-Id: c2f9af39-ab8b-40b6-a94a-dec9b2e14de3
	I0429 12:48:12.979829    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"375","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0429 12:48:13.176469    3296 request.go:629] Waited for 195.6901ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.176469    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.176734    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.176734    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.176734    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.184321    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:13.184321    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Audit-Id: eb782e6b-e0d4-4880-a25e-059332928fe3
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.184321    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.184321    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.184321    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:13.185604    3296 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.185604    3296 pod_ready.go:81] duration metric: took 348.8036ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.185604    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.380150    3296 request.go:629] Waited for 194.3872ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 12:48:13.380234    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 12:48:13.380438    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.380438    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.380503    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.384246    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:13.384246    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Audit-Id: 16902323-5317-49dc-a050-1c05fbf2447d
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.385054    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.385054    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.385189    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 12:48:13.584220    3296 request.go:629] Waited for 197.0057ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:13.584358    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:13.584358    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.584358    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.584358    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.588371    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:13.588371    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.588371    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Audit-Id: aad7e695-0358-4fac-97a0-89102aa3e85c
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.588371    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.589260    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"625","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0429 12:48:13.589537    3296 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.589537    3296 pod_ready.go:81] duration metric: took 403.9301ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.589537    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.786079    3296 request.go:629] Waited for 196.2715ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:48:13.786079    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:48:13.786079    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.786383    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.786383    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.790876    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:13.790876    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Audit-Id: 7a61fcbd-566e-4344-b176-faf124521ad5
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.790876    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.790876    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.791284    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"379","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0429 12:48:13.974364    3296 request.go:629] Waited for 182.6101ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.974515    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.974651    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.974896    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.974896    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.977839    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:13.978533    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Audit-Id: 10150d3a-18fb-49e6-b280-e98bbb3d444b
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.978607    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.978607    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.978607    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.978855    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:13.979415    3296 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.979415    3296 pod_ready.go:81] duration metric: took 389.8741ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.979500    3296 pod_ready.go:38] duration metric: took 1.2030784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:48:13.979500    3296 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:48:13.992716    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:48:14.019253    3296 system_svc.go:56] duration metric: took 39.7527ms WaitForService to wait for kubelet
	I0429 12:48:14.019320    3296 kubeadm.go:576] duration metric: took 24.0450452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:48:14.019320    3296 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:48:14.177527    3296 request.go:629] Waited for 158.0768ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes
	I0429 12:48:14.177815    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes
	I0429 12:48:14.177815    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:14.177815    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:14.177815    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:14.181881    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:14.181881    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:14.182639    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:14.182639    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:14 GMT
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Audit-Id: aaa1c9b4-e781-4a89-9137-b98b7184a74c
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:14.182747    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:14.182822    3296 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0429 12:48:14.183880    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:48:14.183880    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:48:14.183880    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:48:14.183880    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:48:14.183880    3296 node_conditions.go:105] duration metric: took 164.5584ms to run NodePressure ...
	I0429 12:48:14.183880    3296 start.go:240] waiting for startup goroutines ...
	I0429 12:48:14.183880    3296 start.go:254] writing updated cluster config ...
	I0429 12:48:14.198239    3296 ssh_runner.go:195] Run: rm -f paused
	I0429 12:48:14.346996    3296 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 12:48:14.350122    3296 out.go:177] * Done! kubectl is now configured to use "multinode-409200" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.206463668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.217366912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.218316116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.218385716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.218687418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:45:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea71df7098870120a2d4896da35fd5a83ed362c3d7a02fabd52cfd120dbaa40f/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 12:45:02 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:45:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba73c7e4d62c254f26c80096c2c5f7821593464788b17b6707d2cb7cad969e8d/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.635457316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636083617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636125217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636297718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736418780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736676980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736820981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.738556985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861076906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861156406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861205506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861322806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:40 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:48:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a063be2c6a2b3661cf9646e44862baf96718fcd26549482289dd884d3e11b6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 12:48:41 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:48:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.443570060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.443726962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.444350768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.444618971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a3d650be06c0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   d3a063be2c6a2       busybox-fc5497c4f-gr44t
	98ab9c7d68851       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   ba73c7e4d62c2       coredns-7db6d8ff4d-ctb8n
	5a03c0724371b       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   ea71df7098870       storage-provisioner
	caeb8f4bcea15       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   3792c8bbb983d       kindnet-xj48j
	3ba8caba4bc56       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   2d26cd85561dd       kube-proxy-g2jp8
	315326a1ce10c       259c8277fcbbc                                                                                         5 minutes ago       Running             kube-scheduler            0                   c88537851c019       kube-scheduler-multinode-409200
	390664a859132       c42f13656d0b2                                                                                         5 minutes ago       Running             kube-apiserver            0                   85aab37150a11       kube-apiserver-multinode-409200
	5adb6a9084e4b       c7aad43836fa5                                                                                         5 minutes ago       Running             kube-controller-manager   0                   19fd9c3dddd43       kube-controller-manager-multinode-409200
	030b6d42f50f9       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   5d39391ba43b6       etcd-multinode-409200
	
	
	==> coredns [98ab9c7d6885] <==
	[INFO] 10.244.0.3:49783 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199102s
	[INFO] 10.244.1.2:51801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218002s
	[INFO] 10.244.1.2:45305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000112002s
	[INFO] 10.244.1.2:41116 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177102s
	[INFO] 10.244.1.2:57979 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158402s
	[INFO] 10.244.1.2:49615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000059801s
	[INFO] 10.244.1.2:42034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000564s
	[INFO] 10.244.1.2:59112 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133602s
	[INFO] 10.244.1.2:44817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055401s
	[INFO] 10.244.0.3:47750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202902s
	[INFO] 10.244.0.3:42610 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058701s
	[INFO] 10.244.0.3:48140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094301s
	[INFO] 10.244.0.3:43769 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056701s
	[INFO] 10.244.1.2:35529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000365104s
	[INFO] 10.244.1.2:35716 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176402s
	[INFO] 10.244.1.2:54486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129601s
	[INFO] 10.244.1.2:44351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000646s
	[INFO] 10.244.0.3:53572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267303s
	[INFO] 10.244.0.3:60447 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147901s
	[INFO] 10.244.0.3:49757 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147202s
	[INFO] 10.244.0.3:51305 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081501s
	[INFO] 10.244.1.2:52861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175302s
	[INFO] 10.244.1.2:45137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199102s
	[INFO] 10.244.1.2:32823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190002s
	[INFO] 10.244.1.2:41704 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061001s
	
	
	==> describe nodes <==
	Name:               multinode-409200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_44_34_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:49:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:49:09 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:49:09 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:49:09 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:49:09 +0000   Mon, 29 Apr 2024 12:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.185.116
	  Hostname:    multinode-409200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5308ef48a604eec8cefa00b64c99d59
	  System UUID:                560251d1-f442-3048-aa69-bfa1c5b44db2
	  Boot ID:                    c750a879-a407-4348-b519-0853c8e57aab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gr44t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-7db6d8ff4d-ctb8n                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m43s
	  kube-system                 etcd-multinode-409200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-xj48j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m43s
	  kube-system                 kube-apiserver-multinode-409200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-multinode-409200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-g2jp8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-multinode-409200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m41s  kube-proxy       
	  Normal  Starting                 4m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m57s  kubelet          Node multinode-409200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s  kubelet          Node multinode-409200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s  kubelet          Node multinode-409200 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m44s  node-controller  Node multinode-409200 event: Registered Node multinode-409200 in Controller
	  Normal  NodeReady                4m29s  kubelet          Node multinode-409200 status is now: NodeReady
	
	
	Name:               multinode-409200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_49_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:49:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:48:50 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:48:50 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:48:50 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:48:50 +0000   Mon, 29 Apr 2024 12:48:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.183.208
	  Hostname:    multinode-409200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d58c45a85c440c597f0a96b30e84f09
	  System UUID:                8c823ba6-3970-cc46-8a8d-d45bb5bace8c
	  Boot ID:                    40b5e515-11a3-4198-b85e-669d356ae177
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xvm2v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-svw9w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-proxy-lwc65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x2 over 101s)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x2 over 101s)  kubelet          Node multinode-409200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x2 over 101s)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-409200-m02 event: Registered Node multinode-409200-m02 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-409200-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.197340] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 12:43] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.192639] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +31.320327] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.121697] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.579483] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.194821] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.242876] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[Apr29 12:44] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +0.202815] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.211261] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.302320] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[ +11.768479] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.123744] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.764600] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +6.490625] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.131334] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.059343] systemd-fstab-generator[2119]: Ignoring "noauto" option for root device
	[  +0.134282] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.603784] systemd-fstab-generator[2313]: Ignoring "noauto" option for root device
	[  +0.252752] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.930863] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [030b6d42f50f] <==
	{"level":"info","ts":"2024-04-29T12:44:26.501859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cab0b820a65a62da elected leader cab0b820a65a62da at term 2"}
	{"level":"info","ts":"2024-04-29T12:44:26.510101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cab0b820a65a62da","local-member-attributes":"{Name:multinode-409200 ClientURLs:[https://172.26.185.116:2379]}","request-path":"/0/members/cab0b820a65a62da/attributes","cluster-id":"7be84cdbccca5422","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T12:44:26.510456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T12:44:26.510629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T12:44:26.511168Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:44:26.522225Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T12:44:26.538289Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T12:44:26.537327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.185.116:2379"}
	{"level":"info","ts":"2024-04-29T12:44:26.538381Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7be84cdbccca5422","local-member-id":"cab0b820a65a62da","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:44:26.542268Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:44:26.542315Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T12:44:26.545699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-29T12:44:55.120748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.527402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-409200\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-04-29T12:44:55.121025Z","caller":"traceutil/trace.go:171","msg":"trace[404766316] range","detail":"{range_begin:/registry/minions/multinode-409200; range_end:; response_count:1; response_revision:384; }","duration":"126.867204ms","start":"2024-04-29T12:44:54.994138Z","end":"2024-04-29T12:44:55.121005Z","steps":["trace[404766316] 'range keys from in-memory index tree'  (duration: 126.437502ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:44:55.121363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.778786ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T12:44:55.121844Z","caller":"traceutil/trace.go:171","msg":"trace[682102743] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:384; }","duration":"109.124789ms","start":"2024-04-29T12:44:55.012557Z","end":"2024-04-29T12:44:55.121681Z","steps":["trace[682102743] 'range keys from in-memory index tree'  (duration: 108.513185ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:45:15.210156Z","caller":"traceutil/trace.go:171","msg":"trace[1610722120] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"103.234958ms","start":"2024-04-29T12:45:15.10687Z","end":"2024-04-29T12:45:15.210105Z","steps":["trace[1610722120] 'process raft request'  (duration: 102.984959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:47:42.373144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.573798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7123163170697621931 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.26.185.116\" mod_revision:540 > success:<request_put:<key:\"/registry/masterleases/172.26.185.116\" value_size:67 lease:7123163170697621929 >> failure:<request_range:<key:\"/registry/masterleases/172.26.185.116\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T12:47:42.373595Z","caller":"traceutil/trace.go:171","msg":"trace[79596807] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"450.429089ms","start":"2024-04-29T12:47:41.923027Z","end":"2024-04-29T12:47:42.373456Z","steps":["trace[79596807] 'process raft request'  (duration: 195.029792ms)","trace[79596807] 'compare'  (duration: 254.319098ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T12:47:42.373883Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T12:47:41.923009Z","time spent":"450.744888ms","remote":"127.0.0.1:40784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.26.185.116\" mod_revision:540 > success:<request_put:<key:\"/registry/masterleases/172.26.185.116\" value_size:67 lease:7123163170697621929 >> failure:<request_range:<key:\"/registry/masterleases/172.26.185.116\" > >"}
	{"level":"info","ts":"2024-04-29T12:47:43.008534Z","caller":"traceutil/trace.go:171","msg":"trace[733695479] linearizableReadLoop","detail":"{readStateIndex:600; appliedIndex:599; }","duration":"185.820907ms","start":"2024-04-29T12:47:42.822694Z","end":"2024-04-29T12:47:43.008515Z","steps":["trace[733695479] 'read index received'  (duration: 185.612707ms)","trace[733695479] 'applied index is now lower than readState.Index'  (duration: 207.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T12:47:43.008639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.918907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T12:47:43.008664Z","caller":"traceutil/trace.go:171","msg":"trace[1012318573] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:549; }","duration":"185.991407ms","start":"2024-04-29T12:47:42.822665Z","end":"2024-04-29T12:47:43.008657Z","steps":["trace[1012318573] 'agreement among raft nodes before linearized reading'  (duration: 185.924207ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:47:43.008865Z","caller":"traceutil/trace.go:171","msg":"trace[263565608] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"344.544256ms","start":"2024-04-29T12:47:42.66421Z","end":"2024-04-29T12:47:43.008754Z","steps":["trace[263565608] 'process raft request'  (duration: 344.116756ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:47:43.008953Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T12:47:42.66419Z","time spent":"344.713155ms","remote":"127.0.0.1:40904","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:547 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 12:49:30 up 7 min,  0 users,  load average: 0.28, 0.31, 0.17
	Linux multinode-409200 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [caeb8f4bcea1] <==
	I0429 12:48:26.714097       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:48:36.727092       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:48:36.727200       1 main.go:227] handling current node
	I0429 12:48:36.727223       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:48:36.727240       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:48:46.739086       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:48:46.739289       1 main.go:227] handling current node
	I0429 12:48:46.739390       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:48:46.739760       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:48:56.746640       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:48:56.746804       1 main.go:227] handling current node
	I0429 12:48:56.746820       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:48:56.746829       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:49:06.759597       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:49:06.759697       1 main.go:227] handling current node
	I0429 12:49:06.759712       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:49:06.759720       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:49:16.772039       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:49:16.772139       1 main.go:227] handling current node
	I0429 12:49:16.772154       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:49:16.772162       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 12:49:26.779155       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 12:49:26.779266       1 main.go:227] handling current node
	I0429 12:49:26.779284       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 12:49:26.779293       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
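kindnet's loop above is steady state: roughly every ten seconds it re-lists the nodes and makes sure the current node has a route to each peer's pod CIDR, which on this two-node cluster means a single route for multinode-409200-m02. The programmed route can be read back directly (illustrative; the device name is a guess):

    # Expect something like: 10.244.1.0/24 via 172.26.183.208 dev eth0
    minikube -p multinode-409200 ssh -- ip route show 10.244.1.0/24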
	
	
	==> kube-apiserver [390664a85913] <==
	I0429 12:44:31.578677       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 12:44:31.768997       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 12:44:31.785872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.26.185.116]
	I0429 12:44:31.787316       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 12:44:31.796178       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 12:44:32.325302       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 12:44:32.866487       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 12:44:32.926171       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 12:44:32.964615       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 12:44:46.825589       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 12:44:47.230258       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:47:42.375122       1 trace.go:236] Trace[1523062445]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.26.185.116,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 12:47:41.760) (total time: 614ms):
	Trace[1523062445]: ---"Transaction prepared" 158ms (12:47:41.920)
	Trace[1523062445]: ---"Txn call completed" 454ms (12:47:42.375)
	Trace[1523062445]: [614.898429ms] [614.898429ms] END
	E0429 12:48:44.534701       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58575: use of closed network connection
	E0429 12:48:45.098245       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58577: use of closed network connection
	E0429 12:48:45.746138       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58579: use of closed network connection
	E0429 12:48:46.297580       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58581: use of closed network connection
	E0429 12:48:46.844349       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58583: use of closed network connection
	E0429 12:48:47.384985       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58585: use of closed network connection
	E0429 12:48:48.418000       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58588: use of closed network connection
	E0429 12:48:58.947143       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58590: use of closed network connection
	E0429 12:48:59.495039       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58593: use of closed network connection
	E0429 12:49:10.043335       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58595: use of closed network connection
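The `use of closed network connection` errors above all share one shape: the peer is 172.26.176.1 (the Hyper-V host side of the virtual switch, going by the addressing in this run) opening a connection to 8443 and dropping it without a clean TLS close, which is what repeated status probes from the host look like. A bare TCP probe reproduces the pattern (hedged; this is not necessarily the harness's exact call):

    # PowerShell on the Windows host; typically yields one such apiserver log line.
    Test-NetConnection -ComputerName 172.26.185.116 -Port 8443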
	
	
	==> kube-controller-manager [5adb6a9084e4] <==
	I0429 12:44:46.868218       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 12:44:46.893362       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 12:44:47.638388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="799.744665ms"
	I0429 12:44:47.726142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.635626ms"
	I0429 12:44:47.726325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.401µs"
	I0429 12:44:48.192114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="102.841519ms"
	I0429 12:44:48.225494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.476922ms"
	I0429 12:44:48.261461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.901256ms"
	I0429 12:44:48.261977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="350.603µs"
	I0429 12:45:01.593292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.901µs"
	I0429 12:45:01.625573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="248.901µs"
	I0429 12:45:03.575482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.075381ms"
	I0429 12:45:03.577737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.2µs"
	I0429 12:45:06.222594       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 12:47:49.237379       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-409200-m02\" does not exist"
	I0429 12:47:49.263216       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-409200-m02" podCIDRs=["10.244.1.0/24"]
	I0429 12:47:51.255160       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m02"
	I0429 12:48:12.497091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 12:48:39.315624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.709457ms"
	I0429 12:48:39.348543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.825151ms"
	I0429 12:48:39.350006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.599µs"
	I0429 12:48:41.641664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.408001ms"
	I0429 12:48:41.641949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.401µs"
	I0429 12:48:41.676091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.426762ms"
	I0429 12:48:41.676205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.201µs"
	
	
	==> kube-proxy [3ba8caba4bc5] <==
	I0429 12:44:49.113215       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:44:49.178365       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.185.116"]
	I0429 12:44:49.235481       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:44:49.235656       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:44:49.235683       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:44:49.240257       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:44:49.243830       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:44:49.243910       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:44:49.247315       1 config.go:192] "Starting service config controller"
	I0429 12:44:49.248504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:44:49.248691       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:44:49.248945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:44:49.251257       1 config.go:319] "Starting node config controller"
	I0429 12:44:49.251298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:44:49.349845       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:44:49.349850       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:44:49.351890       1 shared_informer.go:320] Caches are synced for node config
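Per its startup lines, kube-proxy is running in single-stack IPv4 iptables mode because the guest kernel advertises no IPv6 iptables support (the same gap behind the kubelet ip6tables canary failures further down). The service NAT rules it maintains can be inspected in place, e.g.:

    # Top of the KUBE-SERVICES chain kube-proxy programs on the node.
    minikube -p multinode-409200 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head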
	
	
	==> kube-scheduler [315326a1ce10] <==
	W0429 12:44:30.427247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:44:30.427377       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:44:30.447600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.448660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.467546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:44:30.467843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:44:30.543006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:44:30.543577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:44:30.596529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 12:44:30.596652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 12:44:30.643354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.643664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.668341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.668936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.756255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:44:30.756684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:44:30.842695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:44:30.842746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:44:30.878228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:44:30.878284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 12:44:30.878602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:44:30.878712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:44:30.990384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:44:30.990868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 12:44:32.117111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
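All of the scheduler's forbidden list/watch errors above are stamped 12:44:30, during apiserver bring-up before the system:kube-scheduler RBAC bindings had propagated; the final line shows its caches synced at 12:44:32, with no errors after that. Whether a given binding has settled can be checked with impersonation, e.g.:

    kubectl --context multinode-409200 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces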
	
	
	==> kubelet <==
	Apr 29 12:45:02 multinode-409200 kubelet[2127]: I0429 12:45:02.473446    2127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea71df7098870120a2d4896da35fd5a83ed362c3d7a02fabd52cfd120dbaa40f"
	Apr 29 12:45:03 multinode-409200 kubelet[2127]: I0429 12:45:03.515687    2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.515667565 podStartE2EDuration="8.515667565s" podCreationTimestamp="2024-04-29 12:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 12:45:03.513905171 +0000 UTC m=+30.768054920" watchObservedRunningTime="2024-04-29 12:45:03.515667565 +0000 UTC m=+30.769817314"
	Apr 29 12:45:33 multinode-409200 kubelet[2127]: E0429 12:45:33.017013    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:45:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:45:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:45:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:45:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:46:33 multinode-409200 kubelet[2127]: E0429 12:46:33.017005    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:46:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:46:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:46:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:46:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:47:33 multinode-409200 kubelet[2127]: E0429 12:47:33.018452    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:47:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:47:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:47:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:47:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:48:33 multinode-409200 kubelet[2127]: E0429 12:48:33.018266    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 12:48:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 12:48:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 12:48:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 12:48:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 12:48:39 multinode-409200 kubelet[2127]: I0429 12:48:39.299274    2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ctb8n" podStartSLOduration=232.29925433 podStartE2EDuration="3m52.29925433s" podCreationTimestamp="2024-04-29 12:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 12:45:03.536527087 +0000 UTC m=+30.790676936" watchObservedRunningTime="2024-04-29 12:48:39.29925433 +0000 UTC m=+246.553404179"
	Apr 29 12:48:39 multinode-409200 kubelet[2127]: I0429 12:48:39.300424    2127 topology_manager.go:215] "Topology Admit Handler" podUID="0702453a-eae6-44a3-893d-10d040074461" podNamespace="default" podName="busybox-fc5497c4f-gr44t"
	Apr 29 12:48:39 multinode-409200 kubelet[2127]: I0429 12:48:39.417090    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48cj\" (UniqueName: \"kubernetes.io/projected/0702453a-eae6-44a3-893d-10d040074461-kube-api-access-p48cj\") pod \"busybox-fc5497c4f-gr44t\" (UID: \"0702453a-eae6-44a3-893d-10d040074461\") " pod="default/busybox-fc5497c4f-gr44t"
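The once-a-minute canary stanzas above are one repeated failure: the kubelet probes IPv6 iptables, but this guest kernel exposes no ip6tables `nat` table (ip6table_nat not loaded or not built), so creating KUBE-KUBELET-CANARY exits with status 3. On an IPv4-only minikube cluster this is noise; a quick confirmation from inside the VM (sketch):

    minikube -p multinode-409200 ssh "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"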
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 12:49:22.452323    7232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
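The stderr warning above recurs throughout this report and appears cosmetic for the Hyper-V driver: minikube asks the Docker CLI to resolve the current context "default", finds no metadata file for it under .docker\contexts\meta (the long hex directory name is the SHA-256 of the string "default"), logs the failure, and carries on. The host-side context state can be inspected with:

    docker context ls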
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200: (12.2390735s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-409200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.80s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (286.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 node start m03 -v=7 --alsologtostderr
E0429 13:01:27.482734    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:02:24.782287    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-409200 node start m03 -v=7 --alsologtostderr: exit status 90 (2m55.7500581s)

                                                
                                                
-- stdout --
	* Starting "multinode-409200-m03" worker node in "multinode-409200" cluster
	* Restarting existing hyperv VM for "multinode-409200-m03" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:01:07.292487    3948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 13:01:07.380855    3948 out.go:291] Setting OutFile to fd 1056 ...
	I0429 13:01:07.397603    3948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:01:07.397718    3948 out.go:304] Setting ErrFile to fd 952...
	I0429 13:01:07.397718    3948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:01:07.415428    3948 mustload.go:65] Loading cluster: multinode-409200
	I0429 13:01:07.416400    3948 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:01:07.417427    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:09.544768    3948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:01:09.544996    3948 main.go:141] libmachine: [stderr =====>] : 
	W0429 13:01:09.544996    3948 host.go:58] "multinode-409200-m03" host status: Stopped
	I0429 13:01:09.549030    3948 out.go:177] * Starting "multinode-409200-m03" worker node in "multinode-409200" cluster
	I0429 13:01:09.552958    3948 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:01:09.553083    3948 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 13:01:09.553083    3948 cache.go:56] Caching tarball of preloaded images
	I0429 13:01:09.553083    3948 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 13:01:09.553083    3948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 13:01:09.554253    3948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:01:09.555503    3948 start.go:360] acquireMachinesLock for multinode-409200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:01:09.555503    3948 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-409200-m03"
	I0429 13:01:09.556899    3948 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:01:09.556899    3948 fix.go:54] fixHost starting: m03
	I0429 13:01:09.557205    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:11.695055    3948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:01:11.695055    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:11.695762    3948 fix.go:112] recreateIfNeeded on multinode-409200-m03: state=Stopped err=<nil>
	W0429 13:01:11.695762    3948 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:01:11.699218    3948 out.go:177] * Restarting existing hyperv VM for "multinode-409200-m03" ...
	I0429 13:01:11.701694    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m03
	I0429 13:01:14.875553    3948 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:01:14.875553    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:14.876553    3948 main.go:141] libmachine: Waiting for host to start...
	I0429 13:01:14.876667    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:17.164801    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:17.165391    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:17.165391    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:19.849442    3948 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:01:19.849442    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:20.856427    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:23.050432    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:23.051429    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:23.051622    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:25.657373    3948 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:01:25.658317    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:26.668825    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:28.876557    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:28.876759    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:28.876870    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:31.417891    3948 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:01:31.418382    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:32.429874    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:34.662550    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:34.662550    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:34.662550    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:37.224548    3948 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:01:37.224591    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:38.239358    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:40.442937    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:40.443476    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:40.443476    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:43.095881    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:01:43.095881    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:43.099045    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:45.306331    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:45.306331    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:45.306331    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:47.929503    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:01:47.929503    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:47.930572    3948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:01:47.934215    3948 machine.go:94] provisionDockerMachine start ...
	I0429 13:01:47.934215    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:50.068043    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:50.068043    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:50.068339    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:52.699009    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:01:52.699009    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:52.705059    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:01:52.705821    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:01:52.705821    3948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:01:52.852582    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 13:01:52.852582    3948 buildroot.go:166] provisioning hostname "multinode-409200-m03"
	I0429 13:01:52.852582    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:55.007638    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:55.007638    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:55.007995    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:57.635762    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:01:57.635762    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:57.642926    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:01:57.643021    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:01:57.643021    3948 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200-m03 && echo "multinode-409200-m03" | sudo tee /etc/hostname
	I0429 13:01:57.808523    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200-m03
	
	I0429 13:01:57.808523    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:59.962927    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:59.962927    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:59.963103    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:02.584000    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:02.584096    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:02.589519    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:02.590272    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:02.590272    3948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:02:02.742009    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
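The inline script above is idempotent hostname plumbing: if no /etc/hosts line ends in the node name, it either rewrites an existing 127.0.1.1 entry in place or appends `127.0.1.1 multinode-409200-m03`, so local tools resolve the hostname without DNS. The silent result here means the tee-append branch did not fire (the name already resolved, or the 127.0.1.1 line was rewritten in place). To confirm on the node (illustrative):

    minikube -p multinode-409200 ssh -n multinode-409200-m03 "grep 127.0.1.1 /etc/hosts"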
	I0429 13:02:02.742122    3948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 13:02:02.742225    3948 buildroot.go:174] setting up certificates
	I0429 13:02:02.742225    3948 provision.go:84] configureAuth start
	I0429 13:02:02.742256    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:04.935391    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:04.936414    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:04.936467    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:07.542809    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:07.543223    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:07.543306    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:09.689881    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:09.689881    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:09.690129    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:12.327163    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:12.327974    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:12.327974    3948 provision.go:143] copyHostCerts
	I0429 13:02:12.327974    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 13:02:12.327974    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 13:02:12.328539    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 13:02:12.328978    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 13:02:12.330001    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 13:02:12.330650    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 13:02:12.330650    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 13:02:12.331140    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 13:02:12.332002    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 13:02:12.332002    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 13:02:12.332002    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 13:02:12.332752    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 13:02:12.333757    3948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200-m03 san=[127.0.0.1 172.26.181.104 localhost minikube multinode-409200-m03]
	I0429 13:02:12.412730    3948 provision.go:177] copyRemoteCerts
	I0429 13:02:12.426552    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:02:12.426552    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:14.606263    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:14.606263    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:14.606418    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:17.285251    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:17.285251    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:17.285932    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:02:17.410128    3948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.983538s)
	I0429 13:02:17.410248    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 13:02:17.410740    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:02:17.476570    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 13:02:17.477029    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 13:02:17.530806    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 13:02:17.531492    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 13:02:17.586703    3948 provision.go:87] duration metric: took 14.8443322s to configureAuth
	I0429 13:02:17.586870    3948 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:02:17.587618    3948 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:02:17.587618    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:19.780567    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:19.781023    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:19.781023    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:22.420229    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:22.420229    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:22.428123    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:22.428985    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:22.428985    3948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 13:02:22.558727    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 13:02:22.558727    3948 buildroot.go:70] root file system type: tmpfs
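`tmpfs` as the root fstype confirms the buildroot guest runs from RAM, so anything outside persisted mounts is gone after a stop; that is why the next step re-renders and re-installs the docker unit on every start instead of assuming it survived. Equivalent check (sketch):

    minikube -p multinode-409200 ssh -- findmnt -n -o FSTYPE /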
	I0429 13:02:22.559260    3948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 13:02:22.559379    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:24.785099    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:24.785099    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:24.785099    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:27.350424    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:27.350424    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:27.356691    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:27.357371    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:27.357448    3948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 13:02:27.533164    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 13:02:27.533416    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:29.673938    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:29.674203    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:29.674203    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:32.317773    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:32.317773    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:32.324667    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:32.325339    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:32.325339    3948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 13:02:34.729685    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 13:02:34.729761    3948 machine.go:97] duration metric: took 46.795186s to provisionDockerMachine
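The `diff: can't stat` result above is the expected first-boot path on a tmpfs root: with no prior docker.service on disk, the `||` branch installs the freshly rendered unit, `systemctl -f enable` creates the multi-user.target symlink shown, and docker restarts with the TLS flags from the unit. A hedged post-check of what systemd actually loaded:

    minikube -p multinode-409200 ssh -n multinode-409200-m03 "sudo systemctl cat docker | head -n 8"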
	I0429 13:02:34.729761    3948 start.go:293] postStartSetup for "multinode-409200-m03" (driver="hyperv")
	I0429 13:02:34.729933    3948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:02:34.743729    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:02:34.743729    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:36.853118    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:36.853118    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:36.854026    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:39.525607    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:39.526407    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:39.526943    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:02:39.632510    3948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8887433s)
	I0429 13:02:39.645560    3948 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:02:39.652597    3948 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:02:39.652682    3948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 13:02:39.652834    3948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 13:02:39.655219    3948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 13:02:39.655282    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 13:02:39.675297    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:02:39.694399    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 13:02:39.754279    3948 start.go:296] duration metric: took 5.0244796s for postStartSetup
	I0429 13:02:39.754420    3948 fix.go:56] duration metric: took 1m30.1968281s for fixHost
	I0429 13:02:39.754521    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:41.886734    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:41.886734    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:41.887072    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:44.531049    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:44.531049    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:44.539969    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:44.540872    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:44.540872    3948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:02:44.680451    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714395764.672951992
	
	I0429 13:02:44.680451    3948 fix.go:216] guest clock: 1714395764.672951992
	I0429 13:02:44.680451    3948 fix.go:229] Guest: 2024-04-29 13:02:44.672951992 +0000 UTC Remote: 2024-04-29 13:02:39.7545217 +0000 UTC m=+92.568010001 (delta=4.918430292s)
	I0429 13:02:44.680643    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:46.852458    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:46.852458    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:46.852458    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:49.533426    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:49.533536    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:49.540051    3948 main.go:141] libmachine: Using SSH client type: native
	I0429 13:02:49.540581    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
	I0429 13:02:49.540721    3948 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714395764
	I0429 13:02:49.693444    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 13:02:44 UTC 2024
	
	I0429 13:02:49.693515    3948 fix.go:236] clock set: Mon Apr 29 13:02:44 UTC 2024
	 (err=<nil>)
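	
	The clock fix above works in three steps: read the guest clock over SSH with date +%s.%N, compare it to the host-side timestamp recorded when provisioning finished (delta=4.918430292s here), and, if the drift matters, reset the guest with sudo date -s @<seconds>. A small sketch of that arithmetic, using the exact values from this log:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		guest := time.Unix(1714395764, 672951992)                      // parsed from date +%s.%N
		host := time.Date(2024, 4, 29, 13, 2, 39, 754521700, time.UTC) // the "Remote" stamp above
		delta := guest.Sub(host)
		fmt.Println("delta:", delta) // prints: delta: 4.918430292s
		if delta > time.Second || delta < -time.Second {
			// Emit the corrective command seen in the log.
			fmt.Printf("sudo date -s @%d\n", guest.Unix())
		}
	}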
	I0429 13:02:49.693565    3948 start.go:83] releasing machines lock for "multinode-409200-m03", held for 1m40.1372922s
	I0429 13:02:49.693843    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:51.857599    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:51.857599    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:51.857599    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:54.453065    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:54.453356    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:54.457776    3948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:02:54.457942    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:54.470792    3948 ssh_runner.go:195] Run: systemctl --version
	I0429 13:02:54.470792    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:02:56.705486    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:56.705486    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:56.706322    3948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:02:56.706322    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:56.706322    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:56.706404    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:02:59.452766    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:59.452856    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:59.453504    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:02:59.488377    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:02:59.488761    3948 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:02:59.489482    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:02:59.560474    3948 ssh_runner.go:235] Completed: systemctl --version: (5.0896433s)
	I0429 13:02:59.575386    3948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 13:02:59.644447    3948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:02:59.644447    3948 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1866313s)
	I0429 13:02:59.657620    3948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:02:59.690080    3948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
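	
	The find command above disables competing CNI configs by renaming anything matching *bridge* or *podman* in /etc/cni/net.d with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. An illustrative Go equivalent (not minikube's code):
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join(dir, name)
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					fmt.Println(err)
				} else {
					fmt.Println("disabled", p)
				}
			}
		}
	}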
	I0429 13:02:59.690080    3948 start.go:494] detecting cgroup driver to use...
	I0429 13:02:59.690080    3948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:02:59.743877    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 13:02:59.781677    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 13:02:59.803160    3948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 13:02:59.817720    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 13:02:59.859311    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:02:59.894960    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 13:02:59.933086    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:02:59.976956    3948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:03:00.014447    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 13:03:00.055121    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 13:03:00.093592    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 13:03:00.131163    3948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:03:00.166769    3948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:03:00.206052    3948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:03:00.436295    3948 ssh_runner.go:195] Run: sudo systemctl restart containerd
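	
	The sed sequence above rewrites /etc/containerd/config.toml for the cgroupfs driver: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 names are mapped to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled, before containerd is reloaded and restarted. A sketch of the central SystemdCgroup rewrite as a Go regexp, assuming the stock config.toml layout:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// A fragment of a typical config.toml before the rewrite.
		conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}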
	I0429 13:03:00.471900    3948 start.go:494] detecting cgroup driver to use...
	I0429 13:03:00.490381    3948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 13:03:00.532100    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:03:00.568386    3948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:03:00.622103    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:03:00.663322    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:03:00.707051    3948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 13:03:00.778444    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:03:00.804633    3948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:03:00.854861    3948 ssh_runner.go:195] Run: which cri-dockerd
	I0429 13:03:00.874722    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 13:03:00.892734    3948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 13:03:00.941233    3948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 13:03:01.162035    3948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 13:03:01.371718    3948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 13:03:01.372083    3948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
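	
	docker.go then pushes a 130-byte daemon.json selecting the cgroupfs driver; the payload itself is not shown in the log. A representative file, with the exact field set being an assumption, generated here in Go:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // the driver named in the log
			// The remaining fields are illustrative, not read from this report.
			"log-driver":     "json-file",
			"log-opts":       map[string]string{"max-size": "100m"},
			"storage-driver": "overlay2",
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b))
	}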
	I0429 13:03:01.423667    3948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:03:01.643030    3948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 13:04:02.791454    3948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1478976s)
	I0429 13:04:02.804905    3948 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 13:04:02.841240    3948 out.go:177] 
	W0429 13:04:02.844003    3948 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 13:02:32 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.993134928Z" level=info msg="Starting up"
	Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.994010714Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.995198096Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.037892273Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068315133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068397132Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068488831Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068516430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069583715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069633914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069903410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070193506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070220006Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070234405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070762598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.071561686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075083635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075183534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075367031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075543429Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076479115Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076603313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076624313Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.086706867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087016963Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087045762Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087066062Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087083862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087214060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087674453Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087767652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087876750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087900250Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087945949Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087966249Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087981749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087999749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088017348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088033748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088057748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088073548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088154946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088181746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088199346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088215045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088230045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088245945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088260945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088279745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088296644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088314044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088404843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088433542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088450342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088470442Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088499241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088514941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088531341Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088673339Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088728538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088745138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088761038Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088853436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088902936Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088920535Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089333629Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089621025Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089696024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089760523Z" level=info msg="containerd successfully booted in 0.055772s"
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.057409084Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.129290410Z" level=info msg="Loading containers: start."
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.484471167Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.575218965Z" level=info msg="Loading containers: done."
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.622435856Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.623353752Z" level=info msg="Daemon has completed initialization"
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.720409023Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 13:02:34 multinode-409200-m03 systemd[1]: Started Docker Application Container Engine.
	Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.724038707Z" level=info msg="API listen on [::]:2376"
	Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.662862565Z" level=info msg="Processing signal 'terminated'"
	Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.665978194Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 13:03:01 multinode-409200-m03 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667551908Z" level=info msg="Daemon shutdown complete"
	Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667788810Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667796410Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 13:03:02 multinode-409200-m03 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 13:03:02 multinode-409200-m03 dockerd[1041]: time="2024-04-29T13:03:02.755086936Z" level=info msg="Starting up"
	Apr 29 13:04:02 multinode-409200-m03 dockerd[1041]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 13:04:02 multinode-409200-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
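	
	The journal above narrows the failure down: the first dockerd (pid 660) spawned its own managed containerd and started cleanly, but after the reconfiguration the restarted dockerd (pid 1041) dials the system socket /run/containerd/containerd.sock and gives up at the 60s deadline, which is consistent with the system containerd having been stopped earlier in this sequence and never coming back. A diagnostic sketch, assuming that socket is indeed what dockerd is waiting on:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
		if err != nil {
			// Mirrors the failure above: nothing is listening on the socket.
			fmt.Println("containerd socket not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is up")
	}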
	W0429 13:04:02.844977    3948 out.go:239] * 
	W0429 13:04:02.870383    3948 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 13:04:02.874117    3948 out.go:177] 

** /stderr **
multinode_test.go:284: W0429 13:01:07.292487    3948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 13:01:07.380855    3948 out.go:291] Setting OutFile to fd 1056 ...
I0429 13:01:07.397603    3948 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 13:01:07.397718    3948 out.go:304] Setting ErrFile to fd 952...
I0429 13:01:07.397718    3948 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 13:01:07.415428    3948 mustload.go:65] Loading cluster: multinode-409200
I0429 13:01:07.416400    3948 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 13:01:07.417427    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:09.544768    3948 main.go:141] libmachine: [stdout =====>] : Off

I0429 13:01:09.544996    3948 main.go:141] libmachine: [stderr =====>] : 
W0429 13:01:09.544996    3948 host.go:58] "multinode-409200-m03" host status: Stopped
I0429 13:01:09.549030    3948 out.go:177] * Starting "multinode-409200-m03" worker node in "multinode-409200" cluster
I0429 13:01:09.552958    3948 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0429 13:01:09.553083    3948 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0429 13:01:09.553083    3948 cache.go:56] Caching tarball of preloaded images
I0429 13:01:09.553083    3948 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0429 13:01:09.553083    3948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0429 13:01:09.554253    3948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
I0429 13:01:09.555503    3948 start.go:360] acquireMachinesLock for multinode-409200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0429 13:01:09.555503    3948 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-409200-m03"
I0429 13:01:09.556899    3948 start.go:96] Skipping create...Using existing machine configuration
I0429 13:01:09.556899    3948 fix.go:54] fixHost starting: m03
I0429 13:01:09.557205    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:11.695055    3948 main.go:141] libmachine: [stdout =====>] : Off

I0429 13:01:11.695055    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:11.695762    3948 fix.go:112] recreateIfNeeded on multinode-409200-m03: state=Stopped err=<nil>
W0429 13:01:11.695762    3948 fix.go:138] unexpected machine state, will restart: <nil>
I0429 13:01:11.699218    3948 out.go:177] * Restarting existing hyperv VM for "multinode-409200-m03" ...
I0429 13:01:11.701694    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m03
I0429 13:01:14.875553    3948 main.go:141] libmachine: [stdout =====>] : 
I0429 13:01:14.875553    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:14.876553    3948 main.go:141] libmachine: Waiting for host to start...
I0429 13:01:14.876667    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:17.164801    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:17.165391    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:17.165391    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:19.849442    3948 main.go:141] libmachine: [stdout =====>] : 
I0429 13:01:19.849442    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:20.856427    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:23.050432    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:23.051429    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:23.051622    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:25.657373    3948 main.go:141] libmachine: [stdout =====>] : 
I0429 13:01:25.658317    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:26.668825    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:28.876557    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:28.876759    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:28.876870    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:31.417891    3948 main.go:141] libmachine: [stdout =====>] : 
I0429 13:01:31.418382    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:32.429874    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:34.662550    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:34.662550    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:34.662550    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:37.224548    3948 main.go:141] libmachine: [stdout =====>] : 
I0429 13:01:37.224591    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:38.239358    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:40.442937    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:40.443476    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:40.443476    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:43.095881    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:01:43.095881    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:43.099045    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:45.306331    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:45.306331    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:45.306331    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:47.929503    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:01:47.929503    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:47.930572    3948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
I0429 13:01:47.934215    3948 machine.go:94] provisionDockerMachine start ...
I0429 13:01:47.934215    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:50.068043    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:50.068043    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:50.068339    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:52.699009    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:01:52.699009    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:52.705059    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:01:52.705821    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:01:52.705821    3948 main.go:141] libmachine: About to run SSH command:
hostname
I0429 13:01:52.852582    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0429 13:01:52.852582    3948 buildroot.go:166] provisioning hostname "multinode-409200-m03"
I0429 13:01:52.852582    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:55.007638    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:55.007638    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:55.007995    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:01:57.635762    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:01:57.635762    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:57.642926    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:01:57.643021    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:01:57.643021    3948 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-409200-m03 && echo "multinode-409200-m03" | sudo tee /etc/hostname
I0429 13:01:57.808523    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200-m03

I0429 13:01:57.808523    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:01:59.962927    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:01:59.962927    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:01:59.963103    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:02.584000    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:02.584096    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:02.589519    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:02.590272    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:02.590272    3948 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-409200-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-409200-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0429 13:02:02.742009    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
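The SSH command above is minikube's idempotent /etc/hosts update: it touches the file only when no line already ends in the node name. A minimal standalone sketch of the same pattern (NODE_NAME is an illustrative parameter, not taken from this run; the \s in the grep patterns assumes GNU grep):

#!/usr/bin/env bash
# Idempotent /etc/hosts update, same shape as the script minikube ran above.
NODE_NAME="${1:-multinode-409200-m03}"   # illustrative default
# -x matches whole lines; do nothing if the name is already present.
if ! grep -xq ".*\s${NODE_NAME}" /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NODE_NAME}/g" /etc/hosts
  else
    # No 127.0.1.1 entry yet; append one.
    echo "127.0.1.1 ${NODE_NAME}" | sudo tee -a /etc/hosts
  fi
fi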
I0429 13:02:02.742122    3948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0429 13:02:02.742225    3948 buildroot.go:174] setting up certificates
I0429 13:02:02.742225    3948 provision.go:84] configureAuth start
I0429 13:02:02.742256    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:04.935391    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:04.936414    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:04.936467    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:07.542809    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:07.543223    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:07.543306    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:09.689881    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:09.689881    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:09.690129    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:12.327163    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:12.327974    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:12.327974    3948 provision.go:143] copyHostCerts
I0429 13:02:12.327974    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
I0429 13:02:12.327974    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0429 13:02:12.328539    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0429 13:02:12.328978    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
I0429 13:02:12.330001    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
I0429 13:02:12.330650    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0429 13:02:12.330650    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0429 13:02:12.331140    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0429 13:02:12.332002    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
I0429 13:02:12.332002    3948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0429 13:02:12.332002    3948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0429 13:02:12.332752    3948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
I0429 13:02:12.333757    3948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200-m03 san=[127.0.0.1 172.26.181.104 localhost minikube multinode-409200-m03]
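The server certificate generated here embeds the SANs listed above (loopback, the VM's current IP, and the node names). A quick way to confirm what ended up in server.pem, assuming openssl is available wherever the file lives; this is a suggestion, not a step the test runs:

# Print the SAN extension of the freshly generated server cert.
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
# Expected entries: 127.0.0.1, 172.26.181.104, localhost, minikube,
# multinode-409200-m03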
I0429 13:02:12.412730    3948 provision.go:177] copyRemoteCerts
I0429 13:02:12.426552    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0429 13:02:12.426552    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:14.606263    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:14.606263    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:14.606418    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:17.285251    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:17.285251    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:17.285932    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
I0429 13:02:17.410128    3948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.983538s)
I0429 13:02:17.410248    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0429 13:02:17.410740    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0429 13:02:17.476570    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0429 13:02:17.477029    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0429 13:02:17.530806    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0429 13:02:17.531492    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0429 13:02:17.586703    3948 provision.go:87] duration metric: took 14.8443322s to configureAuth
I0429 13:02:17.586870    3948 buildroot.go:189] setting minikube options for container-runtime
I0429 13:02:17.587618    3948 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 13:02:17.587618    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:19.780567    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:19.781023    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:19.781023    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:22.420229    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:22.420229    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:22.428123    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:22.428985    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:22.428985    3948 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0429 13:02:22.558727    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0429 13:02:22.558727    3948 buildroot.go:70] root file system type: tmpfs
I0429 13:02:22.559260    3948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0429 13:02:22.559379    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:24.785099    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:24.785099    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:24.785099    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:27.350424    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:27.350424    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:27.356691    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:27.357371    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:27.357448    3948 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0429 13:02:27.533164    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0429 13:02:27.533416    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:29.673938    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:29.674203    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:29.674203    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:32.317773    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:32.317773    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:32.324667    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:32.325339    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:32.325339    3948 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0429 13:02:34.729685    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
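The one-liner above is an install-if-changed guard: diff exits non-zero when the staged unit differs from the installed one (or, as in this run, when /lib/systemd/system/docker.service does not exist yet), and only then is the unit moved into place and the daemon reloaded, enabled, and restarted. The same pattern spelled out as a sketch (UNIT is an illustrative variable, not from the log):

# Install a staged systemd unit only when it actually changed.
UNIT=docker.service                      # illustrative
NEW="/lib/systemd/system/${UNIT}.new"    # staged copy written via tee above
CUR="/lib/systemd/system/${UNIT}"
# diff exits non-zero when the files differ or CUR is missing.
sudo diff -u "$CUR" "$NEW" || {
  sudo mv "$NEW" "$CUR"
  sudo systemctl -f daemon-reload &&
    sudo systemctl -f enable "$UNIT" &&
    sudo systemctl -f restart "$UNIT"
}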

I0429 13:02:34.729761    3948 machine.go:97] duration metric: took 46.795186s to provisionDockerMachine
I0429 13:02:34.729761    3948 start.go:293] postStartSetup for "multinode-409200-m03" (driver="hyperv")
I0429 13:02:34.729933    3948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0429 13:02:34.743729    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0429 13:02:34.743729    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:36.853118    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:36.853118    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:36.854026    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:39.525607    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:39.526407    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:39.526943    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
I0429 13:02:39.632510    3948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8887433s)
I0429 13:02:39.645560    3948 ssh_runner.go:195] Run: cat /etc/os-release
I0429 13:02:39.652597    3948 info.go:137] Remote host: Buildroot 2023.02.9
I0429 13:02:39.652682    3948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0429 13:02:39.652834    3948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0429 13:02:39.655219    3948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
I0429 13:02:39.655282    3948 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
I0429 13:02:39.675297    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0429 13:02:39.694399    3948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
I0429 13:02:39.754279    3948 start.go:296] duration metric: took 5.0244796s for postStartSetup
I0429 13:02:39.754420    3948 fix.go:56] duration metric: took 1m30.1968281s for fixHost
I0429 13:02:39.754521    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:41.886734    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:41.886734    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:41.887072    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:44.531049    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:44.531049    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:44.539969    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:44.540872    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:44.540872    3948 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0429 13:02:44.680451    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714395764.672951992

I0429 13:02:44.680451    3948 fix.go:216] guest clock: 1714395764.672951992
I0429 13:02:44.680451    3948 fix.go:229] Guest: 2024-04-29 13:02:44.672951992 +0000 UTC Remote: 2024-04-29 13:02:39.7545217 +0000 UTC m=+92.568010001 (delta=4.918430292s)
I0429 13:02:44.680643    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:46.852458    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:46.852458    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:46.852458    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:49.533426    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:49.533536    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:49.540051    3948 main.go:141] libmachine: Using SSH client type: native
I0429 13:02:49.540581    3948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.181.104 22 <nil> <nil>}
I0429 13:02:49.540721    3948 main.go:141] libmachine: About to run SSH command:
sudo date -s @1714395764
I0429 13:02:49.693444    3948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 13:02:44 UTC 2024

I0429 13:02:49.693515    3948 fix.go:236] clock set: Mon Apr 29 13:02:44 UTC 2024
(err=<nil>)
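fix.go compares the guest clock (1714395764.672951992) against the host-side reference captured a few seconds earlier and, given the ~4.9s delta reported above, pins the guest clock with date -s. The two probes it runs over SSH, shown as a local sketch (REF_EPOCH is taken from the "sudo date -s @1714395764" above):

# Read the guest clock the same way the log does, then pin it to a reference.
GUEST_EPOCH=$(date +%s.%N)     # probe: seconds.nanoseconds since the epoch
REF_EPOCH=1714395764           # reference epoch used by this run
echo "guest=${GUEST_EPOCH} ref=${REF_EPOCH}"
sudo date -s "@${REF_EPOCH}"   # fix: set the clock to the reference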
I0429 13:02:49.693565    3948 start.go:83] releasing machines lock for "multinode-409200-m03", held for 1m40.1372922s
I0429 13:02:49.693843    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:51.857599    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:51.857599    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:51.857599    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:54.453065    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:54.453356    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:54.457776    3948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0429 13:02:54.457942    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:54.470792    3948 ssh_runner.go:195] Run: systemctl --version
I0429 13:02:54.470792    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
I0429 13:02:56.705486    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:56.705486    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:56.706322    3948 main.go:141] libmachine: [stdout =====>] : Running

I0429 13:02:56.706322    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:56.706322    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:56.706404    3948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
I0429 13:02:59.452766    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:59.452856    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:59.453504    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
I0429 13:02:59.488377    3948 main.go:141] libmachine: [stdout =====>] : 172.26.181.104

I0429 13:02:59.488761    3948 main.go:141] libmachine: [stderr =====>] : 
I0429 13:02:59.489482    3948 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
I0429 13:02:59.560474    3948 ssh_runner.go:235] Completed: systemctl --version: (5.0896433s)
I0429 13:02:59.575386    3948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0429 13:02:59.644447    3948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0429 13:02:59.644447    3948 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1866313s)
I0429 13:02:59.657620    3948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0429 13:02:59.690080    3948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0429 13:02:59.690080    3948 start.go:494] detecting cgroup driver to use...
I0429 13:02:59.690080    3948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0429 13:02:59.743877    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0429 13:02:59.781677    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0429 13:02:59.803160    3948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0429 13:02:59.817720    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0429 13:02:59.859311    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0429 13:02:59.894960    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0429 13:02:59.933086    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0429 13:02:59.976956    3948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0429 13:03:00.014447    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0429 13:03:00.055121    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0429 13:03:00.093592    3948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0429 13:03:00.131163    3948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0429 13:03:00.166769    3948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0429 13:03:00.206052    3948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0429 13:03:00.436295    3948 ssh_runner.go:195] Run: sudo systemctl restart containerd
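The sed pipeline above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. A quick sanity check of the result (a suggestion, not a step this run performs):

# Confirm the sed edits landed before relying on the restarted containerd.
grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
# Expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9",
#         conf_dir = "/etc/cni/net.d"
sudo systemctl is-active containerd   # should report: active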
I0429 13:03:00.471900    3948 start.go:494] detecting cgroup driver to use...
I0429 13:03:00.490381    3948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0429 13:03:00.532100    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0429 13:03:00.568386    3948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0429 13:03:00.622103    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0429 13:03:00.663322    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0429 13:03:00.707051    3948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0429 13:03:00.778444    3948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0429 13:03:00.804633    3948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0429 13:03:00.854861    3948 ssh_runner.go:195] Run: which cri-dockerd
I0429 13:03:00.874722    3948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0429 13:03:00.892734    3948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0429 13:03:00.941233    3948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0429 13:03:01.162035    3948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0429 13:03:01.371718    3948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0429 13:03:01.372083    3948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
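Only the size (130 bytes) of the generated /etc/docker/daemon.json is logged, not its contents. A representative daemon.json that selects the cgroupfs driver looks like the sketch below; this is a guess at the shape, not the actual bytes written by this run:

# Illustrative only -- the real file written above is not shown in the log.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF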
I0429 13:03:01.423667    3948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0429 13:03:01.643030    3948 ssh_runner.go:195] Run: sudo systemctl restart docker
I0429 13:04:02.791454    3948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1478976s)
I0429 13:04:02.804905    3948 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0429 13:04:02.841240    3948 out.go:177] 
W0429 13:04:02.844003    3948 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 29 13:02:32 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.993134928Z" level=info msg="Starting up"
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.994010714Z" level=info msg="containerd not running, starting managed containerd"
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.995198096Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.037892273Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068315133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068397132Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068488831Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068516430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069583715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069633914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069903410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070193506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070220006Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070234405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070762598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.071561686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075083635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075183534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075367031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075543429Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076479115Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076603313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076624313Z" level=info msg="metadata content store policy set" policy=shared
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.086706867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087016963Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087045762Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087066062Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087083862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087214060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087674453Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087767652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087876750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087900250Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087945949Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087966249Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087981749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087999749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088017348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088033748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088057748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088073548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088154946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088181746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088199346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088215045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088230045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088245945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088260945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088279745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088296644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088314044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088404843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088433542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088450342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088470442Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088499241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088514941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088531341Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088673339Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088728538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088745138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088761038Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088853436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088902936Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088920535Z" level=info msg="NRI interface is disabled by configuration."
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089333629Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089621025Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089696024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089760523Z" level=info msg="containerd successfully booted in 0.055772s"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.057409084Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.129290410Z" level=info msg="Loading containers: start."
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.484471167Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.575218965Z" level=info msg="Loading containers: done."
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.622435856Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.623353752Z" level=info msg="Daemon has completed initialization"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.720409023Z" level=info msg="API listen on /var/run/docker.sock"
Apr 29 13:02:34 multinode-409200-m03 systemd[1]: Started Docker Application Container Engine.
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.724038707Z" level=info msg="API listen on [::]:2376"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.662862565Z" level=info msg="Processing signal 'terminated'"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.665978194Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 29 13:03:01 multinode-409200-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667551908Z" level=info msg="Daemon shutdown complete"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667788810Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667796410Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 29 13:03:02 multinode-409200-m03 dockerd[1041]: time="2024-04-29T13:03:02.755086936Z" level=info msg="Starting up"
Apr 29 13:04:02 multinode-409200-m03 dockerd[1041]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

                                                
                                                
sudo journalctl --no-pager -u docker:
-- stdout --
Apr 29 13:02:32 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.993134928Z" level=info msg="Starting up"
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.994010714Z" level=info msg="containerd not running, starting managed containerd"
Apr 29 13:02:32 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:32.995198096Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.037892273Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068315133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068397132Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068488831Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.068516430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069583715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069633914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.069903410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070193506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070220006Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070234405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.070762598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.071561686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075083635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075183534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075367031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.075543429Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076479115Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076603313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.076624313Z" level=info msg="metadata content store policy set" policy=shared
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.086706867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087016963Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087045762Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087066062Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087083862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087214060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087674453Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087767652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087876750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087900250Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087945949Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087966249Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087981749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.087999749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088017348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088033748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088057748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088073548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088154946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088181746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088199346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088215045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088230045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088245945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088260945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088279745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088296644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088314044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088404843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088433542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088450342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088470442Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088499241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088514941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088531341Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088673339Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088728538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088745138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088761038Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088853436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088902936Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.088920535Z" level=info msg="NRI interface is disabled by configuration."
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089333629Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089621025Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089696024Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 29 13:02:33 multinode-409200-m03 dockerd[667]: time="2024-04-29T13:02:33.089760523Z" level=info msg="containerd successfully booted in 0.055772s"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.057409084Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.129290410Z" level=info msg="Loading containers: start."
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.484471167Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.575218965Z" level=info msg="Loading containers: done."
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.622435856Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.623353752Z" level=info msg="Daemon has completed initialization"
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.720409023Z" level=info msg="API listen on /var/run/docker.sock"
Apr 29 13:02:34 multinode-409200-m03 systemd[1]: Started Docker Application Container Engine.
Apr 29 13:02:34 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:02:34.724038707Z" level=info msg="API listen on [::]:2376"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.662862565Z" level=info msg="Processing signal 'terminated'"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.665978194Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 29 13:03:01 multinode-409200-m03 systemd[1]: Stopping Docker Application Container Engine...
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667551908Z" level=info msg="Daemon shutdown complete"
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667788810Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 29 13:03:01 multinode-409200-m03 dockerd[660]: time="2024-04-29T13:03:01.667796410Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: docker.service: Deactivated successfully.
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Stopped Docker Application Container Engine.
Apr 29 13:03:02 multinode-409200-m03 systemd[1]: Starting Docker Application Container Engine...
Apr 29 13:03:02 multinode-409200-m03 dockerd[1041]: time="2024-04-29T13:03:02.755086936Z" level=info msg="Starting up"
Apr 29 13:04:02 multinode-409200-m03 dockerd[1041]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 29 13:04:02 multinode-409200-m03 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
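The failure recorded above is that the restarted dockerd (pid 1041) timed out after roughly 60s dialing /run/containerd/containerd.sock, so systemd marked docker.service failed. On the first boot the daemon had started its own managed containerd on /var/run/docker/containerd/containerd.sock, which suggests the second start was waiting on a socket that never appeared. A minimal triage sketch, run inside the affected node (for example over the same ssh path the test harness uses); the commands simply follow the hints in the stderr above, and the socket paths are taken from this log rather than verified on the host:

# Unit state and recent journal entries, as the error text itself suggests
sudo systemctl status docker.service
sudo journalctl -xeu docker.service --no-pager | tail -n 50

# dockerd gave up dialing the containerd socket; see what actually exists
# (both paths appear in the log above; which one is expected depends on config)
ls -l /run/containerd/containerd.sock /var/run/docker/containerd/ 2>/dev/null

# Retry the restart by hand to see whether the 60s dial timeout reproduces
sudo systemctl restart docker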
W0429 13:04:02.844977    3948 out.go:239] * 
W0429 13:04:02.870383    3948 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_1.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 13:04:02.874117    3948 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-409200 node start m03 -v=7 --alsologtostderr": exit status 90
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status -v=7 --alsologtostderr
E0429 13:04:30.746609    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-409200 status -v=7 --alsologtostderr: exit status 2 (36.3082902s)

                                                
                                                
-- stdout --
	multinode-409200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409200-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:04:03.455692    6352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 13:04:03.548150    6352 out.go:291] Setting OutFile to fd 1564 ...
	I0429 13:04:03.548919    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:03.548919    6352 out.go:304] Setting ErrFile to fd 1608...
	I0429 13:04:03.548919    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:03.564722    6352 out.go:298] Setting JSON to false
	I0429 13:04:03.564722    6352 mustload.go:65] Loading cluster: multinode-409200
	I0429 13:04:03.564722    6352 notify.go:220] Checking for updates...
	I0429 13:04:03.565697    6352 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:04:03.565697    6352 status.go:255] checking status of multinode-409200 ...
	I0429 13:04:03.567035    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:05.762299    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:05.763190    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:05.763270    6352 status.go:330] multinode-409200 host status = "Running" (err=<nil>)
	I0429 13:04:05.763270    6352 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:04:05.764249    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:08.002089    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:08.003248    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:08.003387    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:10.624068    6352 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:04:10.624068    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:10.624587    6352 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:04:10.638008    6352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:10.638008    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:12.784979    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:12.784979    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:12.785512    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:15.424539    6352 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:04:15.424539    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:15.425446    6352 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:04:15.531591    6352 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8935458s)
	I0429 13:04:15.545227    6352 ssh_runner.go:195] Run: systemctl --version
	I0429 13:04:15.573493    6352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:15.600687    6352 kubeconfig.go:125] found "multinode-409200" server: "https://172.26.185.116:8443"
	I0429 13:04:15.600687    6352 api_server.go:166] Checking apiserver status ...
	I0429 13:04:15.615455    6352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:04:15.664465    6352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup
	W0429 13:04:15.685719    6352 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:04:15.699192    6352 ssh_runner.go:195] Run: ls
	I0429 13:04:15.706795    6352 api_server.go:253] Checking apiserver healthz at https://172.26.185.116:8443/healthz ...
	I0429 13:04:15.714593    6352 api_server.go:279] https://172.26.185.116:8443/healthz returned 200:
	ok
	I0429 13:04:15.714593    6352 status.go:422] multinode-409200 apiserver status = Running (err=<nil>)
	I0429 13:04:15.714593    6352 status.go:257] multinode-409200 status: &{Name:multinode-409200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:04:15.715458    6352 status.go:255] checking status of multinode-409200-m02 ...
	I0429 13:04:15.716457    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:04:17.873603    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:17.873603    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:17.873603    6352 status.go:330] multinode-409200-m02 host status = "Running" (err=<nil>)
	I0429 13:04:17.873603    6352 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:04:17.874279    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:04:20.090491    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:20.090816    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:20.090816    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:22.746008    6352 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:04:22.746008    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:22.746094    6352 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:04:22.761297    6352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:22.761297    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:04:24.895039    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:24.896039    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:24.896160    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:27.499691    6352 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:04:27.499691    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:27.500905    6352 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 13:04:27.605425    6352 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8440912s)
	I0429 13:04:27.619236    6352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:27.646767    6352 status.go:257] multinode-409200-m02 status: &{Name:multinode-409200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:04:27.646861    6352 status.go:255] checking status of multinode-409200-m03 ...
	I0429 13:04:27.647655    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:04:29.829291    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:29.829291    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:29.830263    6352 status.go:330] multinode-409200-m03 host status = "Running" (err=<nil>)
	I0429 13:04:29.830307    6352 host.go:66] Checking if "multinode-409200-m03" exists ...
	I0429 13:04:29.830884    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:04:32.013391    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:32.013391    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:32.014254    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:34.617644    6352 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:04:34.618599    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:34.618717    6352 host.go:66] Checking if "multinode-409200-m03" exists ...
	I0429 13:04:34.634240    6352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:34.634240    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:04:36.810453    6352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:36.811564    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:36.811564    6352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:39.441651    6352 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:04:39.441651    6352 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:39.442271    6352 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:04:39.538919    6352 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9046419s)
	I0429 13:04:39.554545    6352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:39.581774    6352 status.go:257] multinode-409200-m03 status: &{Name:multinode-409200-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
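The status probe in the trace above is mechanical: libmachine asks Hyper-V for the VM state via PowerShell, then opens ssh and checks kubelet with systemctl; m03 reports host Running but kubelet inactive, hence the non-zero exit. The same two checks can be reproduced by hand; a sketch assuming the profile and node names from this run (the `ssh -n` form matches the audit log further below, and `-Command` is added for clarity over the exact invocation libmachine logs):

# Host state, as libmachine queries it through Hyper-V
powershell.exe -NoProfile -NonInteractive -Command "( Hyper-V\Get-VM multinode-409200-m03 ).state"

# Kubelet state on the node; a zero exit status means the unit is active
out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo systemctl is-active kubelet"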
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-409200 status -v=7 --alsologtostderr: exit status 2 (36.05671s)

                                                
                                                
-- stdout --
	multinode-409200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409200-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:04:41.091431   13924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 13:04:41.178530   13924 out.go:291] Setting OutFile to fd 664 ...
	I0429 13:04:41.178892   13924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:41.178892   13924 out.go:304] Setting ErrFile to fd 1500...
	I0429 13:04:41.179531   13924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:04:41.197122   13924 out.go:298] Setting JSON to false
	I0429 13:04:41.197122   13924 mustload.go:65] Loading cluster: multinode-409200
	I0429 13:04:41.197122   13924 notify.go:220] Checking for updates...
	I0429 13:04:41.198326   13924 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:04:41.198412   13924 status.go:255] checking status of multinode-409200 ...
	I0429 13:04:41.201984   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:43.348215   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:43.348215   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:43.348215   13924 status.go:330] multinode-409200 host status = "Running" (err=<nil>)
	I0429 13:04:43.348215   13924 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:04:43.350680   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:45.529736   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:45.530495   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:45.530952   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:48.177430   13924 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:04:48.177430   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:48.177516   13924 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:04:48.191234   13924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:04:48.191234   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:04:50.359447   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:50.359447   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:50.359447   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:04:52.990511   13924 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:04:52.990511   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:52.990511   13924 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:04:53.092680   13924 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9014089s)
	I0429 13:04:53.105947   13924 ssh_runner.go:195] Run: systemctl --version
	I0429 13:04:53.130697   13924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:04:53.159429   13924 kubeconfig.go:125] found "multinode-409200" server: "https://172.26.185.116:8443"
	I0429 13:04:53.159429   13924 api_server.go:166] Checking apiserver status ...
	I0429 13:04:53.174620   13924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:04:53.220691   13924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup
	W0429 13:04:53.243314   13924 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:04:53.258649   13924 ssh_runner.go:195] Run: ls
	I0429 13:04:53.267836   13924 api_server.go:253] Checking apiserver healthz at https://172.26.185.116:8443/healthz ...
	I0429 13:04:53.276901   13924 api_server.go:279] https://172.26.185.116:8443/healthz returned 200:
	ok
	I0429 13:04:53.276901   13924 status.go:422] multinode-409200 apiserver status = Running (err=<nil>)
	I0429 13:04:53.276901   13924 status.go:257] multinode-409200 status: &{Name:multinode-409200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:04:53.276901   13924 status.go:255] checking status of multinode-409200-m02 ...
	I0429 13:04:53.276901   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:04:55.463739   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:55.463850   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:55.463850   13924 status.go:330] multinode-409200-m02 host status = "Running" (err=<nil>)
	I0429 13:04:55.463850   13924 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:04:55.464692   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:04:57.635122   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:04:57.635122   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:04:57.636157   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:05:00.208816   13924 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:05:00.209114   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:00.209114   13924 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:05:00.226686   13924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:05:00.226686   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:05:02.332825   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:05:02.332825   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:02.333981   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:05:04.936256   13924 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:05:04.936256   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:04.937043   13924 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 13:05:05.035361   13924 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8079923s)
	I0429 13:05:05.049848   13924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:05:05.077135   13924 status.go:257] multinode-409200-m02 status: &{Name:multinode-409200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:05:05.077255   13924 status.go:255] checking status of multinode-409200-m03 ...
	I0429 13:05:05.078037   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:05:07.186636   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:05:07.186789   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:07.186961   13924 status.go:330] multinode-409200-m03 host status = "Running" (err=<nil>)
	I0429 13:05:07.186961   13924 host.go:66] Checking if "multinode-409200-m03" exists ...
	I0429 13:05:07.187723   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:05:09.352542   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:05:09.352542   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:09.352903   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:05:11.963570   13924 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:05:11.964434   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:11.964434   13924 host.go:66] Checking if "multinode-409200-m03" exists ...
	I0429 13:05:11.977700   13924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:05:11.977700   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:05:14.182675   13924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:05:14.182709   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:14.182709   13924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 13:05:16.846985   13924 main.go:141] libmachine: [stdout =====>] : 172.26.181.104
	
	I0429 13:05:16.846985   13924 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:05:16.847866   13924 sshutil.go:53] new ssh client: &{IP:172.26.181.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m03\id_rsa Username:docker}
	I0429 13:05:16.947967   13924 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9702292s)
	I0429 13:05:16.962229   13924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:05:16.989271   13924 status.go:257] multinode-409200-m03 status: &{Name:multinode-409200-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-409200 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200
E0429 13:05:28.004298    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200: (12.2888544s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25: (8.9348998s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-409200 cp multinode-409200:/home/docker/cp-test.txt                                                            | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:55 UTC | 29 Apr 24 12:55 UTC |
	|         | multinode-409200-m03:/home/docker/cp-test_multinode-409200_multinode-409200-m03.txt                                      |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:55 UTC | 29 Apr 24 12:55 UTC |
	|         | multinode-409200 sudo cat                                                                                                |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200-m03 sudo cat                                                                    | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | /home/docker/cp-test_multinode-409200_multinode-409200-m03.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp testdata\cp-test.txt                                                                                 | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200:/home/docker/cp-test_multinode-409200-m02_multinode-409200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200 sudo cat                                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-409200-m02_multinode-409200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m03:/home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200-m03 sudo cat                                                                    | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp testdata\cp-test.txt                                                                                 | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200:/home/docker/cp-test_multinode-409200-m03_multinode-409200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200 sudo cat                                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | /home/docker/cp-test_multinode-409200-m03_multinode-409200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m02:/home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200-m02 sudo cat                                                                    | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | /home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-409200 node stop m03                                                                                           | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 13:00 UTC |
	| node    | multinode-409200 node start                                                                                              | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 13:01 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
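The rows above trace the TestMultiNode serial CopyFile pattern: "minikube cp" pushes a file to a node (or pulls it back into a host temp directory), and "minikube ssh -n <node> sudo cat" reads it back to verify the contents. A minimal PowerShell sketch of one round-trip, assuming each audit row was invoked as "minikube -p <profile> <command> ..." and that out/minikube-windows-amd64.exe is reachable on PATH as minikube:

    # Push the test fixture to the m03 node, then read it back over SSH.
    $p = 'multinode-409200'
    minikube -p $p cp testdata\cp-test.txt "${p}-m03:/home/docker/cp-test.txt"
    minikube -p $p ssh -n "${p}-m03" "sudo cat /home/docker/cp-test.txt"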
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:41:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
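The header above documents the glog/klog line layout used by the rest of this log. As an illustration (not part of the test run), a PowerShell sketch that splits one of these lines into level, timestamp, source location, and message:

    # Sample line taken verbatim from this log.
    $line = 'I0429 12:41:24.071859    3296 out.go:291] Setting OutFile to fd 1376 ...'
    if ($line -match '^(?<level>[IWEF])(?<mmdd>\d{4}) (?<time>[\d:.]+)\s+(?<tid>\d+) (?<loc>[^\]]+)\] (?<msg>.*)$') {
        $Matches.level, $Matches.loc, $Matches.msg
    }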
	I0429 12:41:24.071859    3296 out.go:291] Setting OutFile to fd 1376 ...
	I0429 12:41:24.072685    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:41:24.072685    3296 out.go:304] Setting ErrFile to fd 1392...
	I0429 12:41:24.072685    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:41:24.098316    3296 out.go:298] Setting JSON to false
	I0429 12:41:24.101035    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":35956,"bootTime":1714358527,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 12:41:24.102029    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:41:24.108002    3296 out.go:177] * [multinode-409200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 12:41:24.112063    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:41:24.112063    3296 notify.go:220] Checking for updates...
	I0429 12:41:24.115983    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:41:24.117816    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 12:41:24.120931    3296 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 12:41:24.123137    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:41:24.126348    3296 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:41:24.126348    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:41:29.691313    3296 out.go:177] * Using the hyperv driver based on user configuration
	I0429 12:41:29.694806    3296 start.go:297] selected driver: hyperv
	I0429 12:41:29.694898    3296 start.go:901] validating driver "hyperv" against <nil>
	I0429 12:41:29.694898    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:41:29.750099    3296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:41:29.750905    3296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:41:29.751474    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:41:29.751474    3296 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:41:29.751474    3296 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:41:29.751786    3296 start.go:340] cluster config:
	{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:41:29.751786    3296 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:41:29.758943    3296 out.go:177] * Starting "multinode-409200" primary control-plane node in "multinode-409200" cluster
	I0429 12:41:29.762713    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:41:29.762713    3296 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:41:29.762713    3296 cache.go:56] Caching tarball of preloaded images
	I0429 12:41:29.762713    3296 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:41:29.763382    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:41:29.763583    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:41:29.763583    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json: {Name:mkf8183664b98a8e3f56b1e9ae3d2d10f3e06326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
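The profile config written here is plain JSON, so it can be inspected directly on the Jenkins host. A minimal sketch, assuming the MINIKUBE_HOME layout shown in the paths above:

    $cfgPath = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json'
    $cfg = Get-Content $cfgPath -Raw | ConvertFrom-Json
    $cfg.Driver                                # hyperv
    $cfg.KubernetesConfig.KubernetesVersion    # v1.30.0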
	I0429 12:41:29.764783    3296 start.go:360] acquireMachinesLock for multinode-409200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:41:29.765361    3296 start.go:364] duration metric: took 537.1µs to acquireMachinesLock for "multinode-409200"
	I0429 12:41:29.765392    3296 start.go:93] Provisioning new machine with config: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:41:29.765392    3296 start.go:125] createHost starting for "" (driver="hyperv")
	I0429 12:41:29.769708    3296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:41:29.769909    3296 start.go:159] libmachine.API.Create for "multinode-409200" (driver="hyperv")
	I0429 12:41:29.769909    3296 client.go:168] LocalClient.Create starting
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:41:29.770627    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 12:41:29.771491    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:41:29.771491    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:41:29.771491    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 12:41:31.935654    3296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 12:41:31.936054    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:31.936157    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 12:41:33.722970    3296 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 12:41:33.722970    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:33.723246    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:41:35.256783    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:41:35.256783    3296 main.go:141] libmachine: [stderr =====>] : 
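The two probes above check membership in the Hyper-V Administrators group (well-known SID S-1-5-32-578, which returned False here) and the built-in Administrator role (True); either is sufficient for the driver to proceed. The same checks, extracted verbatim into standalone PowerShell:

    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))   # Hyper-V Administrators
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')               # built-in Administrator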
	I0429 12:41:35.257935    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:41:38.873395    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:41:38.873395    3296 main.go:141] libmachine: [stderr =====>] : 
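The switch query above selects any External switch, or failing that the Default Switch by its well-known GUID; in this run only the Default Switch (SwitchType 1, Internal) matched. The same query, reformatted for readability:

    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json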
	I0429 12:41:38.876182    3296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:41:39.387931    3296 main.go:141] libmachine: Creating SSH key...
	I0429 12:41:39.546045    3296 main.go:141] libmachine: Creating VM...
	I0429 12:41:39.546173    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:41:42.449474    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:41:42.449474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:42.449474    3296 main.go:141] libmachine: Using switch "Default Switch"
	I0429 12:41:42.452970    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:41:44.272448    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:41:44.273105    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:44.273105    3296 main.go:141] libmachine: Creating VHD
	I0429 12:41:44.273238    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 12:41:47.975205    3296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F97E0AA5-FA51-469C-8B71-A632009B8D6A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 12:41:47.976124    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:47.976156    3296 main.go:141] libmachine: Writing magic tar header
	I0429 12:41:47.976156    3296 main.go:141] libmachine: Writing SSH key tar header
	I0429 12:41:47.986236    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 12:41:51.133276    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:51.133501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:51.133501    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd' -SizeBytes 20000MB
	I0429 12:41:53.639402    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:53.639402    3296 main.go:141] libmachine: [stderr =====>] : 
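The disk above is built in three steps: a 10 MB fixed VHD is created so the driver can write a raw tar stream (the "magic tar header" and SSH key) straight into its flat image, then the file is converted to a dynamic VHD and grown to the requested 20000 MB. Consolidated from the commands echoed above:

    $machine = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200'
    Hyper-V\New-VHD -Path "$machine\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...the driver writes the tar header and SSH key into fixed.vhd at this point...
    Hyper-V\Convert-VHD -Path "$machine\fixed.vhd" -DestinationPath "$machine\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$machine\disk.vhd" -SizeBytes 20000MB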
	I0429 12:41:53.640023    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 12:41:57.400298    3296 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-409200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 12:41:57.400573    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:57.400573    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-409200 -DynamicMemoryEnabled $false
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:41:59.692823    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-409200 -Count 2
	I0429 12:42:01.887698    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:01.887698    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:01.887837    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\boot2docker.iso'
	I0429 12:42:04.494718    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:04.495429    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:04.495717    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-409200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\disk.vhd'
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:07.155562    3296 main.go:141] libmachine: Starting VM...
	I0429 12:42:07.155562    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200
	I0429 12:42:10.190311    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:10.190311    3296 main.go:141] libmachine: [stderr =====>] : 
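VM assembly is the straight cmdlet sequence echoed above: create the VM on the chosen switch, pin static memory and two vCPUs, attach the boot2docker ISO and the prepared disk, then power on. Consolidated:

    $name = 'multinode-409200'
    $machine = "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\$name"
    Hyper-V\New-VM $name -Path $machine -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$machine\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$machine\disk.vhd"
    Hyper-V\Start-VM $name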
	I0429 12:42:10.191327    3296 main.go:141] libmachine: Waiting for host to start...
	I0429 12:42:10.191327    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:12.498310    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:12.499114    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:12.499174    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:15.123193    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:15.123539    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:16.125896    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:18.339192    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:18.339192    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:18.339501    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:20.940949    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:20.940949    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:21.943094    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:24.162676    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:24.162676    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:24.162828    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:26.695989    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:26.696067    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:27.696767    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:29.911560    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:29.912251    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:29.912399    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:32.458187    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:42:32.458475    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:33.461544    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:35.662693    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:35.662936    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:35.663029    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:38.281947    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:38.282170    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:38.282170    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:40.427474    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:40.427474    3296 main.go:141] libmachine: [stderr =====>] : 
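The "Waiting for host to start" phase above is a poll loop: the state query keeps returning Running while the IP query returns nothing until the guest's DHCP lease lands (about 28 s in this run). A minimal sketch of that loop:

    $name = 'multinode-409200'
    do {
        # First IP address of the VM's first network adapter, empty until DHCP assigns one.
        $ip = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
        if (-not $ip) { Start-Sleep -Seconds 1 }
    } until ($ip)
    $ip   # 172.26.185.116 in this run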
	I0429 12:42:40.427552    3296 machine.go:94] provisionDockerMachine start ...
	I0429 12:42:40.427698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:42.651318    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:42.651847    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:42.651847    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:45.312059    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:45.312723    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:45.319337    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:45.332543    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:45.332543    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:42:45.454107    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 12:42:45.454107    3296 buildroot.go:166] provisioning hostname "multinode-409200"
	I0429 12:42:45.454107    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:47.620360    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:50.273859    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:50.273859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:50.282260    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:50.282787    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:50.283030    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200 && echo "multinode-409200" | sudo tee /etc/hostname
	I0429 12:42:50.450202    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200
	
	I0429 12:42:50.450202    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:52.617897    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:52.617980    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:52.617980    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:42:55.249656    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:42:55.250533    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:55.254727    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:42:55.255866    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:42:55.255866    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:42:55.394645    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:42:55.394645    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 12:42:55.394645    3296 buildroot.go:174] setting up certificates
	I0429 12:42:55.394645    3296 provision.go:84] configureAuth start
	I0429 12:42:55.394645    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:42:57.543276    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:42:57.543276    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:42:57.543379    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:00.118356    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:00.119200    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:00.119200    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:02.260662    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:02.261622    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:02.261691    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:04.839372    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:04.839909    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:04.839909    3296 provision.go:143] copyHostCerts
	I0429 12:43:04.839909    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 12:43:04.839909    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 12:43:04.839909    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 12:43:04.840902    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 12:43:04.841954    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 12:43:04.842681    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 12:43:04.842681    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 12:43:04.842890    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 12:43:04.844022    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 12:43:04.844108    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 12:43:04.844108    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 12:43:04.844646    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 12:43:04.845317    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200 san=[127.0.0.1 172.26.185.116 localhost minikube multinode-409200]
	I0429 12:43:05.203469    3296 provision.go:177] copyRemoteCerts
	I0429 12:43:05.217479    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:43:05.217479    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:07.318983    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:07.318983    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:07.319302    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:09.898054    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:09.898054    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:09.898952    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:09.997063    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7795466s)
	I0429 12:43:09.997124    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 12:43:09.997764    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:43:10.047385    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 12:43:10.047970    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 12:43:10.097809    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 12:43:10.098469    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:43:10.147392    3296 provision.go:87] duration metric: took 14.752632s to configureAuth
	I0429 12:43:10.147544    3296 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:43:10.148090    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:43:10.148180    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:12.343126    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:12.343461    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:12.343550    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:14.975410    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:14.975410    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:14.981555    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:14.982278    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:14.982278    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:43:15.110028    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:43:15.110028    3296 buildroot.go:70] root file system type: tmpfs
	I0429 12:43:15.110028    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:43:15.110028    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:17.280970    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:17.280970    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:17.281792    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:19.907151    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:19.907292    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:19.913390    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:19.914028    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:19.914121    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:43:20.069774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:43:20.069774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:22.193863    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:22.194959    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:22.194995    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:24.736866    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:24.736866    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:24.744211    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:24.744211    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:24.744211    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:43:26.935989    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:43:26.935989    3296 machine.go:97] duration metric: took 46.5080745s to provisionDockerMachine
	I0429 12:43:26.935989    3296 client.go:171] duration metric: took 1m57.1651667s to LocalClient.Create
	I0429 12:43:26.935989    3296 start.go:167] duration metric: took 1m57.1651667s to libmachine.API.Create "multinode-409200"
	I0429 12:43:26.935989    3296 start.go:293] postStartSetup for "multinode-409200" (driver="hyperv")
	I0429 12:43:26.936526    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:43:26.950981    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:43:26.950981    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:29.014332    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:29.014507    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:29.014590    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:31.564952    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:31.564952    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:31.565713    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:31.666721    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7157042s)
	I0429 12:43:31.680632    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:43:31.688804    3296 command_runner.go:130] > NAME=Buildroot
	I0429 12:43:31.688804    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 12:43:31.688804    3296 command_runner.go:130] > ID=buildroot
	I0429 12:43:31.688804    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 12:43:31.688804    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 12:43:31.688804    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:43:31.688804    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 12:43:31.689611    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 12:43:31.690672    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 12:43:31.690778    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 12:43:31.703066    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:43:31.729604    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 12:43:31.780229    3296 start.go:296] duration metric: took 4.844202s for postStartSetup
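postStartSetup mirrors everything under .minikube\files into the guest path-for-path, which is how the host file ...\files\etc\ssl\certs\84962.pem became /etc/ssl/certs/84962.pem above. A small sketch listing the guest-side targets such a scan would produce:

    $files = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\files'
    Get-ChildItem -Recurse -File $files | ForEach-Object {
        # Strip the host prefix and flip separators to get the guest path.
        $_.FullName.Substring($files.Length) -replace '\\', '/'   # e.g. /etc/ssl/certs/84962.pem
    }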
	I0429 12:43:31.784136    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:33.908553    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:33.909388    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:33.909388    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:36.459415    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:36.459415    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:36.460347    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:43:36.463566    3296 start.go:128] duration metric: took 2m6.6971858s to createHost
	I0429 12:43:36.463729    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:38.546973    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:38.546973    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:38.548012    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:41.045793    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:41.045793    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:41.054379    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:41.055273    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:41.055273    3296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:43:41.191523    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394621.194598920
	
	I0429 12:43:41.192059    3296 fix.go:216] guest clock: 1714394621.194598920
	I0429 12:43:41.192059    3296 fix.go:229] Guest: 2024-04-29 12:43:41.19459892 +0000 UTC Remote: 2024-04-29 12:43:36.4636493 +0000 UTC m=+132.586901101 (delta=4.73094962s)
	I0429 12:43:41.192228    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:43.353158    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:43.353158    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:43.353419    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:45.947951    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:45.947951    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:45.954725    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:43:45.955446    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.185.116 22 <nil> <nil>}
	I0429 12:43:45.955446    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714394621
	I0429 12:43:46.089226    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 12:43:41 UTC 2024
	
	I0429 12:43:46.089226    3296 fix.go:236] clock set: Mon Apr 29 12:43:41 UTC 2024
	 (err=<nil>)
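The fix.go lines above implement the guest clock repair: read the guest clock with date +%s.%N, compare it to the host-side reference captured at createHost time, and if the skew is large (4.73s here) reset the guest with sudo date -s @<epoch>. A sketch of the delta computation under those same values; parseGuestClock is a hypothetical helper, not minikube's API:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "1714394621.194598920" (the output of
    // `date +%s.%N`) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1714394621.194598920")
        if err != nil {
            panic(err)
        }
        // Host-side reference from the log line above.
        remote := time.Date(2024, 4, 29, 12, 43, 36, 463649300, time.UTC)
        delta := guest.Sub(remote)
        fmt.Printf("guest %s remote %s delta %s\n", guest.UTC(), remote, delta)
        // When the delta exceeds tolerance, the guest clock is reset with:
        //   sudo date -s @1714394621
        fmt.Printf("reset command: sudo date -s @%d\n", guest.Unix())
    }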
	I0429 12:43:46.089226    3296 start.go:83] releasing machines lock for "multinode-409200", held for 2m16.3227711s
	I0429 12:43:46.089226    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:48.230317    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:48.230419    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:48.230483    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:50.802602    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:50.802840    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:50.807572    3296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:43:50.807716    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:50.825417    3296 ssh_runner.go:195] Run: cat /version.json
	I0429 12:43:50.825524    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:43:53.002688    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:53.002968    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:53.002968    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:53.050817    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:43:53.051493    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:53.051493    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:43:55.699871    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:55.699967    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:55.700043    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:55.724091    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:43:55.724091    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:43:55.724091    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:43:55.795365    3296 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 12:43:55.795521    3296 ssh_runner.go:235] Completed: cat /version.json: (4.9699587s)
	I0429 12:43:55.813261    3296 ssh_runner.go:195] Run: systemctl --version
	I0429 12:43:55.899405    3296 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 12:43:55.899405    3296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0917933s)
	I0429 12:43:55.899405    3296 command_runner.go:130] > systemd 252 (252)
	I0429 12:43:55.899552    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 12:43:55.912945    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:43:55.922181    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 12:43:55.922699    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:43:55.936091    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:43:55.965604    3296 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 12:43:55.966233    3296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
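The find invocation above disables competing CNI configs by renaming anything matching *bridge* or *podman* to a .mk_disabled suffix, so only the CNI minikube manages (kindnet for multinode) stays active. A pure-Go equivalent of that rename pass, as a sketch (the real code shells the find command over SSH):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs under dir to
    // <name>.mk_disabled, mirroring the `find ... -exec mv` run above.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }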
	I0429 12:43:55.966284    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:43:55.966319    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:43:56.001475    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 12:43:56.015262    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:43:56.047945    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:43:56.070384    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:43:56.083490    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:43:56.120537    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:43:56.154883    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:43:56.188316    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:43:56.223442    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:43:56.258876    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:43:56.294527    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:43:56.327102    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
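All of the containerd adjustments above are in-place sed rewrites of /etc/containerd/config.toml: pin the pause image to registry.k8s.io/pause:3.9, set SystemdCgroup = false to match the cgroupfs driver, migrate the legacy io.containerd.runtime.v1.linux and io.containerd.runc.v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A compact sketch applying the same regex rewrites in Go instead of sed (assumed equivalent, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // edits mirrors the sed expressions run against config.toml above.
    var edits = []struct{ re, repl string }{
        {`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
        {`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
        {`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
        {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
        {`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
        {`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
    }

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range edits {
            data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
        }
        if err := os.WriteFile(path, data, 0o644); err != nil {
            fmt.Println(err)
        }
    }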
	I0429 12:43:56.360132    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:43:56.378154    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 12:43:56.390095    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:43:56.422878    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:43:56.636016    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:43:56.671057    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:43:56.683519    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:43:56.709138    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Unit]
	I0429 12:43:56.709204    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 12:43:56.709204    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 12:43:56.709204    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 12:43:56.709204    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 12:43:56.709204    3296 command_runner.go:130] > StartLimitBurst=3
	I0429 12:43:56.709204    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Service]
	I0429 12:43:56.709204    3296 command_runner.go:130] > Type=notify
	I0429 12:43:56.709204    3296 command_runner.go:130] > Restart=on-failure
	I0429 12:43:56.709204    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 12:43:56.709204    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 12:43:56.709204    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 12:43:56.709204    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 12:43:56.709204    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 12:43:56.709204    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 12:43:56.709204    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecStart=
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 12:43:56.709204    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 12:43:56.709204    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitNPROC=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > LimitCORE=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 12:43:56.709204    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 12:43:56.709204    3296 command_runner.go:130] > TasksMax=infinity
	I0429 12:43:56.709204    3296 command_runner.go:130] > TimeoutStartSec=0
	I0429 12:43:56.709204    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 12:43:56.709204    3296 command_runner.go:130] > Delegate=yes
	I0429 12:43:56.709204    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 12:43:56.709204    3296 command_runner.go:130] > KillMode=process
	I0429 12:43:56.709204    3296 command_runner.go:130] > [Install]
	I0429 12:43:56.709204    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0429 12:43:56.724078    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:43:56.760055    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:43:56.804342    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:43:56.841223    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:43:56.879244    3296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:43:56.945463    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:43:56.969681    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:43:57.013978    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 12:43:57.026023    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:43:57.032826    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 12:43:57.044975    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:43:57.063795    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:43:57.110207    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:43:57.317699    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:43:57.510634    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:43:57.510894    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:43:57.561438    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:43:57.760225    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:44:00.316595    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5563503s)
	I0429 12:44:00.335164    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:44:00.373198    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:44:00.408144    3296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:44:00.623820    3296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:44:00.830313    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:01.044370    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:44:01.092258    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:44:01.128927    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:01.339615    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
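Several of the transfers above are logged as scp memory --> <path>: the payload (the 10-cni.conf drop-in, daemon.json) never exists as a host file and is streamed straight from memory into the guest. One common way to get that effect over a plain SSH session is to pipe stdin into sudo tee; a sketch using the golang.org/x/crypto/ssh client, with authentication details elided (assumption: key auth is configured as in the sshutil lines earlier):

    package main

    import (
        "bytes"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams an in-memory payload to path on the remote
    // host by piping it into `sudo tee` (sketch of the "scp memory" idea,
    // not minikube's actual transfer code).
    func writeRemoteFile(client *ssh.Client, path string, payload []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ /* key auth elided */ },
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        }
        client, err := ssh.Dial("tcp", "172.26.185.116:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        dropIn := []byte("[Service]\nExecStart=\n") // e.g. a systemd drop-in
        if err := writeRemoteFile(client, "/etc/systemd/system/cri-docker.service.d/10-cni.conf", dropIn); err != nil {
            log.Fatal(err)
        }
    }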
	I0429 12:44:01.448476    3296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:44:01.462973    3296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:44:01.471348    3296 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 12:44:01.472099    3296 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 12:44:01.472099    3296 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0429 12:44:01.472099    3296 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 12:44:01.472099    3296 command_runner.go:130] > Access: 2024-04-29 12:44:01.364927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] > Modify: 2024-04-29 12:44:01.364927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] > Change: 2024-04-29 12:44:01.368927212 +0000
	I0429 12:44:01.472099    3296 command_runner.go:130] >  Birth: -
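The 60-second socket wait is a simple stat poll: probe the path until it exists or the deadline passes, then move on to the crictl version probe. A generic sketch of that loop (the poll interval is an assumption; the exact minikube helper may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls path until it exists or timeout elapses,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }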
	I0429 12:44:01.472099    3296 start.go:562] Will wait 60s for crictl version
	I0429 12:44:01.486507    3296 ssh_runner.go:195] Run: which crictl
	I0429 12:44:01.492886    3296 command_runner.go:130] > /usr/bin/crictl
	I0429 12:44:01.507784    3296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:44:01.570510    3296 command_runner.go:130] > Version:  0.1.0
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeName:  docker
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 12:44:01.570510    3296 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 12:44:01.570510    3296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:44:01.581217    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:44:01.613056    3296 command_runner.go:130] > 26.0.2
	I0429 12:44:01.624406    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:44:01.656934    3296 command_runner.go:130] > 26.0.2
	I0429 12:44:01.665478    3296 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:44:01.665478    3296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 12:44:01.669659    3296 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 12:44:01.672596    3296 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 12:44:01.672596    3296 ip.go:210] interface addr: 172.26.176.1/20
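getIPForInterface above walks the host adapters, skips names that do not start with the requested prefix (hence the "does not match" lines for Ethernet 2 and the loopback), and returns the matching interface's IPv4 address, 172.26.176.1, while ignoring the fe80:: link-local entry. A sketch with the standard net package:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterface returns the first IPv4 address of the interface whose
    // name starts with prefix, similar to the search logged above.
    func ipForInterface(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" does not match
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok {
                    if ip4 := ipnet.IP.To4(); ip4 != nil {
                        return ip4, nil // skips fe80:: link-local entries
                    }
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterface("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }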
	I0429 12:44:01.687109    3296 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 12:44:01.693840    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
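The host.minikube.internal mapping is installed idempotently: grep first checks whether the entry already exists, and if not the bash pipeline strips any stale line with grep -v, appends the fresh one, and copies the temp file back over /etc/hosts. The same idea in Go, as a sketch standing in for the remote bash one-liner:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so it contains exactly one
    // "<ip>\t<name>" line, mirroring the grep -v / append / cp pipeline.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("/etc/hosts", "172.26.176.1", "host.minikube.internal"))
    }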
	I0429 12:44:01.717667    3296 kubeadm.go:877] updating cluster {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:44:01.717874    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:44:01.729576    3296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:44:01.753147    3296 docker.go:685] Got preloaded images: 
	I0429 12:44:01.753147    3296 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:44:01.767100    3296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:44:01.785012    3296 command_runner.go:139] > {"Repositories":{}}
	I0429 12:44:01.798929    3296 ssh_runner.go:195] Run: which lz4
	I0429 12:44:01.805633    3296 command_runner.go:130] > /usr/bin/lz4
	I0429 12:44:01.805633    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:44:01.819826    3296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:44:01.825751    3296 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:44:01.826519    3296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:44:01.826519    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:44:03.851411    3296 docker.go:649] duration metric: took 2.0457613s to copy over tarball
	I0429 12:44:03.867833    3296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:44:12.753232    3296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8852851s)
	I0429 12:44:12.753314    3296 ssh_runner.go:146] rm: /preloaded.tar.lz4
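The preload flow above is: stat the tarball in the guest, scp the ~360 MB archive across only when it is missing, extract it with lz4 directly into /var (which restores the Docker image store), then delete the archive. A sketch driving the same tar invocation via os/exec (run locally for illustration; the real flow executes it over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed image preload into destDir,
    // mirroring the `tar --xattrs -I lz4 -C /var -xf` step above.
    func extractPreload(tarball, destDir string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload missing, copy it first: %w", err)
        }
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return err
        }
        return os.Remove(tarball) // free the archive space once extracted
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
    }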
	I0429 12:44:12.822086    3296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:44:12.840727    3296 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0429 12:44:12.841096    3296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:44:12.894613    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:13.126976    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:44:16.488560    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3606689s)
	I0429 12:44:16.498170    3296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 12:44:16.525752    3296 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 12:44:16.525752    3296 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:44:16.525752    3296 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:44:16.525752    3296 cache_images.go:84] Images are preloaded, skipping loading
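Whether the preload took effect is decided by re-listing docker images --format {{.Repository}}:{{.Tag}} and checking that every expected control-plane image is present; before the restart the list was empty, which is why kube-apiserver:v1.30.0 "wasn't preloaded". A sketch of that comparison:

    package main

    import (
        "fmt"
        "strings"
    )

    // missingImages returns the expected refs that do not appear in the
    // `docker images --format {{.Repository}}:{{.Tag}}` output.
    func missingImages(dockerImagesOut string, expected []string) []string {
        have := map[string]bool{}
        for _, line := range strings.Split(dockerImagesOut, "\n") {
            have[strings.TrimSpace(line)] = true
        }
        var missing []string
        for _, ref := range expected {
            if !have[ref] {
                missing = append(missing, ref)
            }
        }
        return missing
    }

    func main() {
        out := "registry.k8s.io/kube-apiserver:v1.30.0\nregistry.k8s.io/etcd:3.5.12-0\n"
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.30.0",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/pause:3.9",
        }
        fmt.Println(missingImages(out, expected)) // [registry.k8s.io/pause:3.9]
    }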
	I0429 12:44:16.525752    3296 kubeadm.go:928] updating node { 172.26.185.116 8443 v1.30.0 docker true true} ...
	I0429 12:44:16.525752    3296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.185.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:44:16.535787    3296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:44:16.573357    3296 command_runner.go:130] > cgroupfs
	I0429 12:44:16.574213    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:44:16.574213    3296 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:44:16.574304    3296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:44:16.574304    3296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.185.116 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-409200 NodeName:multinode-409200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.185.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.185.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:44:16.574671    3296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.185.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-409200"
	  kubeletExtraArgs:
	    node-ip: 172.26.185.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:44:16.587109    3296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubeadm
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubectl
	I0429 12:44:16.607116    3296 command_runner.go:130] > kubelet
	I0429 12:44:16.607486    3296 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:44:16.619346    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 12:44:16.637355    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 12:44:16.671622    3296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:44:16.704800    3296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0429 12:44:16.752839    3296 ssh_runner.go:195] Run: grep 172.26.185.116	control-plane.minikube.internal$ /etc/hosts
	I0429 12:44:16.760084    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.185.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:44:16.797647    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:17.006548    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:44:17.033894    3296 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.185.116
	I0429 12:44:17.033894    3296 certs.go:194] generating shared ca certs ...
	I0429 12:44:17.034052    3296 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.034597    3296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 12:44:17.035031    3296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 12:44:17.035221    3296 certs.go:256] generating profile certs ...
	I0429 12:44:17.036085    3296 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key
	I0429 12:44:17.036211    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt with IP's: []
	I0429 12:44:17.301116    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt ...
	I0429 12:44:17.302129    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.crt: {Name:mkfee835225f0dcf0ca6b08c61d512a13d0301a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.303376    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key ...
	I0429 12:44:17.303376    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key: {Name:mk4d7a0cb775c99aef602c36f31814957f63535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.304404    3296 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62
	I0429 12:44:17.304404    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.185.116]
	I0429 12:44:17.586870    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 ...
	I0429 12:44:17.586870    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62: {Name:mk0a0a342ca8f742883109c474511a24825717f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.588592    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62 ...
	I0429 12:44:17.588592    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62: {Name:mk32857892243135c3cbfe168f73f05a5d58da10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.589224    3296 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.09b66b62 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt
	I0429 12:44:17.606459    3296 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.09b66b62 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key
	I0429 12:44:17.607893    3296 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key
	I0429 12:44:17.608084    3296 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt with IP's: []
	I0429 12:44:17.874409    3296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt ...
	I0429 12:44:17.874409    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt: {Name:mkd8d2745eb84bf562904d25d78a7b0493e0cb19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:17.876814    3296 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key ...
	I0429 12:44:17.876814    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key: {Name:mk7bf6bbe7b08ba2b2f94cfa54674c3d6223c5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
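Each profile certificate above is a leaf signed by the shared minikubeCA, and the apiserver cert carries IP SANs for 10.96.0.1 (the in-cluster kubernetes service address), 127.0.0.1, 10.0.0.1 and the node IP 172.26.185.116. A condensed crypto/x509 sketch of issuing such a cert; the throwaway CA in main and all parameter choices are illustrative assumptions, and error handling is trimmed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signServingCert issues a CA-signed serving certificate with the given
    // IP SANs, roughly what the apiserver.crt generation above does.
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips, // e.g. 10.96.0.1 127.0.0.1 10.0.0.1 172.26.185.116
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway self-signed CA so the sketch is self-contained
        // (errors elided for brevity).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("172.26.185.116")}
        der, _, err := signServingCert(caCert, caKey, ips)
        fmt.Println(len(der), err)
    }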
	I0429 12:44:17.877250    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:44:17.878152    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:44:17.878341    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:44:17.878507    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:44:17.878669    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:44:17.878826    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:44:17.878978    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:44:17.888209    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:44:17.888566    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 12:44:17.889216    3296 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 12:44:17.889216    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 12:44:17.889604    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 12:44:17.889823    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 12:44:17.890158    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 12:44:17.890381    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 12:44:17.890381    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:17.892657    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:44:17.937012    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:44:17.975119    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:44:18.025670    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:44:18.074631    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:44:18.125186    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:44:18.175688    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:44:18.226917    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:44:18.280971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 12:44:18.331289    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 12:44:18.381170    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:44:18.427231    3296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:44:18.475141    3296 ssh_runner.go:195] Run: openssl version
	I0429 12:44:18.484326    3296 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 12:44:18.499276    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 12:44:18.538050    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.546253    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.546253    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.560608    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 12:44:18.570893    3296 command_runner.go:130] > 3ec20f2e
	I0429 12:44:18.588308    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:44:18.624233    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:44:18.663145    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.670597    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.670597    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.685901    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:44:18.695104    3296 command_runner.go:130] > b5213941
	I0429 12:44:18.706505    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:44:18.741866    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 12:44:18.774617    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.781062    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.781228    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.796432    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 12:44:18.804880    3296 command_runner.go:130] > 51391683
	I0429 12:44:18.819968    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
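The trust-store installs above follow the standard OpenSSL hashed-directory layout: copy the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's lookup-by-hash can find the CA. A sketch that drives the same commands (assumes openssl on PATH and permission to write /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the openssl / ln -fs sequence above.
    func installCACert(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); err == nil {
            return link, nil // already installed
        }
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(link, err)
    }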
	I0429 12:44:18.854731    3296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:44:18.859928    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:44:18.860911    3296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:44:18.860911    3296 kubeadm.go:391] StartCluster: {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:44:18.872252    3296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:44:18.908952    3296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:44:18.928831    3296 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0429 12:44:18.929074    3296 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0429 12:44:18.929074    3296 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0429 12:44:18.944379    3296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:44:18.974571    3296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 12:44:18.994214    3296 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:44:18.994214    3296 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:44:18.994214    3296 kubeadm.go:156] found existing configuration files:
	
	I0429 12:44:19.008068    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:44:19.025289    3296 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:44:19.026305    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:44:19.039591    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:44:19.081589    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:44:19.102122    3296 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:44:19.102278    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:44:19.115510    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:44:19.146531    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:44:19.164905    3296 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:44:19.164905    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:44:19.181666    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:44:19.213432    3296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:44:19.234056    3296 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:44:19.234643    3296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:44:19.246709    3296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
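The four grep/rm cycles above apply one idempotent rule per kubeconfig: if the file does not reference https://control-plane.minikube.internal:8443 (including the file simply not existing, as on this first start), remove it so kubeadm regenerates it. A sketch of that cleanup:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanupStaleConfig removes any kubeconfig that does not reference the
    // expected control-plane endpoint, as the grep/rm loop above does.
    func cleanupStaleConfig(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // up to date, keep it
            }
            os.Remove(p) // missing or stale: let kubeadm recreate it
        }
    }

    func main() {
        cleanupStaleConfig("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
        fmt.Println("stale configs removed")
    }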
	I0429 12:44:19.265779    3296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:44:19.521753    3296 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:44:19.521821    3296 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0429 12:44:19.522093    3296 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:44:19.522175    3296 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 12:44:19.707934    3296 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:44:19.707934    3296 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:44:19.708156    3296 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:44:19.708156    3296 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:44:19.708156    3296 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:44:19.708156    3296 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
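As the preflight hint notes, the image pull can be performed ahead of time so that init itself does not block on downloads; a hedged equivalent against the same kubeadm config file used in the init command above:

  sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
    kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml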
	I0429 12:44:20.023840    3296 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:44:20.023840    3296 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:44:20.029443    3296 out.go:204]   - Generating certificates and keys ...
	I0429 12:44:20.029588    3296 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 12:44:20.029588    3296 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:44:20.029757    3296 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 12:44:20.029823    3296 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:44:20.369033    3296 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:44:20.369103    3296 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:44:20.476523    3296 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:44:20.476523    3296 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:44:20.776704    3296 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:44:20.776760    3296 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0429 12:44:21.061534    3296 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:44:21.061650    3296 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0429 12:44:21.304438    3296 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0429 12:44:21.304438    3296 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:44:21.304438    3296 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.304438    3296 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.896641    3296 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:44:21.897606    3296 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0429 12:44:21.897866    3296 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:21.898051    3296 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-409200] and IPs [172.26.185.116 127.0.0.1 ::1]
	I0429 12:44:22.003777    3296 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:44:22.003777    3296 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:44:22.188658    3296 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:44:22.188658    3296 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:44:22.373946    3296 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:44:22.373946    3296 command_runner.go:130] > [certs] Generating "sa" key and public key
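The etcd serving and peer certificates above are signed for the node's DNS names and IPs ([localhost multinode-409200] and [172.26.185.116 127.0.0.1 ::1]); one way to confirm the SANs on a generated certificate, using the certificateDir reported earlier:

  sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
    | grep -A1 "Subject Alternative Name"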
	I0429 12:44:22.374390    3296 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:44:22.374390    3296 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:44:22.494389    3296 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:44:22.495356    3296 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:44:22.609117    3296 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:44:22.609248    3296 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:44:22.737208    3296 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:44:22.737208    3296 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:44:22.999498    3296 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:44:22.999498    3296 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:44:23.233231    3296 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:44:23.233920    3296 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
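Each kubeconfig written in this phase should embed the same control-plane endpoint that the earlier grep checks looked for; a quick spot-check against admin.conf:

  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
    config view -o jsonpath='{.clusters[0].cluster.server}'
  # expected output: https://control-plane.minikube.internal:8443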
	I0429 12:44:23.234997    3296 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:44:23.235061    3296 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:44:23.242239    3296 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:44:23.242239    3296 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:44:23.248689    3296 out.go:204]   - Booting up control plane ...
	I0429 12:44:23.248689    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:44:23.248689    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:44:23.248689    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:44:23.248689    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:44:23.249337    3296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:44:23.249337    3296 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:44:23.289260    3296 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:44:23.289318    3296 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:44:23.293418    3296 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:44:23.293418    3296 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:44:23.293418    3296 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:44:23.293418    3296 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 12:44:23.533729    3296 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:44:23.533729    3296 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:44:23.533729    3296 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:44:23.533729    3296 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:44:24.536142    3296 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002273096s
	I0429 12:44:24.536438    3296 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002273096s
	I0429 12:44:24.536769    3296 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:44:24.536769    3296 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:44:32.037593    3296 kubeadm.go:309] [api-check] The API server is healthy after 7.502621698s
	I0429 12:44:32.038456    3296 command_runner.go:130] > [api-check] The API server is healthy after 7.502621698s
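The api-check phase polls the API server's health endpoint until it responds; the same probe can be issued by hand from inside the VM (CA path assumed from the certificateDir above):

  curl --cacert /var/lib/minikube/certs/ca.crt \
    "https://control-plane.minikube.internal:8443/livez?verbose"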
	I0429 12:44:32.060219    3296 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:44:32.060334    3296 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:44:32.091007    3296 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:44:32.091546    3296 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:44:32.144604    3296 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:44:32.144604    3296 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:44:32.145060    3296 command_runner.go:130] > [mark-control-plane] Marking the node multinode-409200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:44:32.145060    3296 kubeadm.go:309] [mark-control-plane] Marking the node multinode-409200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:44:32.161257    3296 command_runner.go:130] > [bootstrap-token] Using token: yfqpmq.jq2ry4kf0oz9zbyr
	I0429 12:44:32.161257    3296 kubeadm.go:309] [bootstrap-token] Using token: yfqpmq.jq2ry4kf0oz9zbyr
	I0429 12:44:32.164440    3296 out.go:204]   - Configuring RBAC rules ...
	I0429 12:44:32.164650    3296 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:44:32.164730    3296 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:44:32.173392    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:44:32.173466    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:44:32.192810    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:44:32.192895    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:44:32.198990    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:44:32.198990    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:44:32.207240    3296 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:44:32.207347    3296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:44:32.220434    3296 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:44:32.220434    3296 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
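The bootstrap token minted here (yfqpmq.…) is the one reused by the join commands printed below; it can be listed, or replaced with a fresh one, via kubeadm's token subcommands:

  sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token list
  sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command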
	I0429 12:44:32.454502    3296 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:44:32.454502    3296 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:44:32.926433    3296 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:44:32.926433    3296 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 12:44:33.459427    3296 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:44:33.459540    3296 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.460905    3296 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:44:33.460905    3296 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.460905    3296 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:44:33.460905    3296 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0429 12:44:33.460905    3296 kubeadm.go:309] 
	I0429 12:44:33.461661    3296 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:44:33.461661    3296 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0429 12:44:33.461841    3296 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:44:33.461841    3296 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:44:33.461841    3296 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:44:33.462057    3296 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:44:33.462057    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0429 12:44:33.462176    3296 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:44:33.462176    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:44:33.462176    3296 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:44:33.462176    3296 kubeadm.go:309] 
	I0429 12:44:33.462176    3296 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0429 12:44:33.462176    3296 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:44:33.462721    3296 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:44:33.462721    3296 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:44:33.462721    3296 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:44:33.462900    3296 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:44:33.462900    3296 kubeadm.go:309] 
	I0429 12:44:33.463040    3296 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:44:33.463040    3296 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:44:33.463040    3296 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:44:33.463040    3296 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0429 12:44:33.463040    3296 kubeadm.go:309] 
	I0429 12:44:33.463040    3296 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.463040    3296 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.463831    3296 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 12:44:33.463927    3296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a \
	I0429 12:44:33.464158    3296 command_runner.go:130] > 	--control-plane 
	I0429 12:44:33.464222    3296 kubeadm.go:309] 	--control-plane 
	I0429 12:44:33.464222    3296 kubeadm.go:309] 
	I0429 12:44:33.464222    3296 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:44:33.464364    3296 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:44:33.464438    3296 kubeadm.go:309] 
	I0429 12:44:33.464625    3296 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.464625    3296 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yfqpmq.jq2ry4kf0oz9zbyr \
	I0429 12:44:33.464755    3296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:44:33.464832    3296 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:44:33.465037    3296 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:44:33.465037    3296 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
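The --discovery-token-ca-cert-hash in both join commands is simply the SHA-256 of the cluster CA's public key, so it can be recomputed from the CA certificate to validate a join command (the standard kubeadm recipe, with the cert path taken from the certificateDir above):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'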
	I0429 12:44:33.465099    3296 cni.go:84] Creating CNI manager for ""
	I0429 12:44:33.465167    3296 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:44:33.468388    3296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:44:33.482496    3296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:44:33.490673    3296 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 12:44:33.490673    3296 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 12:44:33.491156    3296 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 12:44:33.491156    3296 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 12:44:33.491156    3296 command_runner.go:130] > Access: 2024-04-29 12:42:36.251857500 +0000
	I0429 12:44:33.491156    3296 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 12:44:33.491225    3296 command_runner.go:130] > Change: 2024-04-29 12:42:28.230000000 +0000
	I0429 12:44:33.491225    3296 command_runner.go:130] >  Birth: -
	I0429 12:44:33.491369    3296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:44:33.491369    3296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:44:33.549690    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:44:34.258026    3296 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > serviceaccount/kindnet created
	I0429 12:44:34.258122    3296 command_runner.go:130] > daemonset.apps/kindnet created
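With the kindnet manifest applied, the daemonset should schedule one pod per node as nodes join; a hedged post-check (the app=kindnet label is assumed from the upstream kindnet manifest):

  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get ds kindnet
  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l app=kindnet -o wide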
	I0429 12:44:34.258191    3296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:44:34.274018    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:34.274904    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-409200 minikube.k8s.io/updated_at=2024_04_29T12_44_34_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=multinode-409200 minikube.k8s.io/primary=true
	I0429 12:44:34.289779    3296 command_runner.go:130] > -16
	I0429 12:44:34.290424    3296 ops.go:34] apiserver oom_adj: -16
	I0429 12:44:34.463193    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0429 12:44:34.463193    3296 command_runner.go:130] > node/multinode-409200 labeled
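Both kubectl calls above can be verified after the fact: the clusterrolebinding grants cluster-admin to the kube-system:default serviceaccount, and the node now carries the minikube.k8s.io/* labels:

  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get clusterrolebinding minikube-rbac -o wide
  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get node multinode-409200 --show-labels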
	I0429 12:44:34.478175    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:34.590305    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:34.977286    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:35.090759    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:35.480369    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:35.602599    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:35.990560    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:36.099579    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:36.479582    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:36.601445    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:36.981693    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:37.090673    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:37.484965    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:37.603109    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:37.982196    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:38.096571    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:38.484642    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:38.609982    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:38.985308    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:39.096468    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:39.489292    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:39.606454    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:39.992451    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:40.124908    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:40.481016    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:40.598978    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:40.981506    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:41.094480    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:41.487459    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:41.607630    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:41.983291    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:42.114225    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:42.491967    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:42.636510    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:42.984042    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:43.144920    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:43.485736    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:43.607086    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:43.987859    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:44.098597    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:44.479282    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:44.598110    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:44.982334    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:45.112645    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:45.487733    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:45.604298    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:45.993246    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:46.110503    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:46.489099    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:46.598695    3296 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0429 12:44:46.992710    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:44:47.118922    3296 command_runner.go:130] > NAME      SECRETS   AGE
	I0429 12:44:47.118922    3296 command_runner.go:130] > default   0         1s
	I0429 12:44:47.118922    3296 kubeadm.go:1107] duration metric: took 12.8606308s to wait for elevateKubeSystemPrivileges
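The ~12.9 s of NotFound errors above are expected: minikube polls roughly every 500 ms until the token controller creates the "default" serviceaccount in the new cluster. The equivalent wait as a shell loop:

  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5   # "default" appears once kube-controller-manager's token controller is running
  done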
	W0429 12:44:47.118922    3296 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:44:47.118922    3296 kubeadm.go:393] duration metric: took 28.257791s to StartCluster
	I0429 12:44:47.118922    3296 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:47.119947    3296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.120913    3296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:44:47.123001    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:44:47.123001    3296 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:44:47.123001    3296 addons.go:69] Setting storage-provisioner=true in profile "multinode-409200"
	I0429 12:44:47.123001    3296 addons.go:234] Setting addon storage-provisioner=true in "multinode-409200"
	I0429 12:44:47.123001    3296 start.go:234] Will wait 6m0s for node &{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:44:47.123001    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:44:47.123001    3296 addons.go:69] Setting default-storageclass=true in profile "multinode-409200"
	I0429 12:44:47.123001    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:44:47.128930    3296 out.go:177] * Verifying Kubernetes components...
	I0429 12:44:47.123918    3296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-409200"
	I0429 12:44:47.124922    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:47.129918    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:47.147920    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:44:47.387572    3296 command_runner.go:130] > apiVersion: v1
	I0429 12:44:47.387572    3296 command_runner.go:130] > data:
	I0429 12:44:47.387572    3296 command_runner.go:130] >   Corefile: |
	I0429 12:44:47.387572    3296 command_runner.go:130] >     .:53 {
	I0429 12:44:47.387572    3296 command_runner.go:130] >         errors
	I0429 12:44:47.387572    3296 command_runner.go:130] >         health {
	I0429 12:44:47.387572    3296 command_runner.go:130] >            lameduck 5s
	I0429 12:44:47.387572    3296 command_runner.go:130] >         }
	I0429 12:44:47.387572    3296 command_runner.go:130] >         ready
	I0429 12:44:47.388041    3296 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0429 12:44:47.388041    3296 command_runner.go:130] >            pods insecure
	I0429 12:44:47.388041    3296 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0429 12:44:47.388041    3296 command_runner.go:130] >            ttl 30
	I0429 12:44:47.388041    3296 command_runner.go:130] >         }
	I0429 12:44:47.388128    3296 command_runner.go:130] >         prometheus :9153
	I0429 12:44:47.388128    3296 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0429 12:44:47.388171    3296 command_runner.go:130] >            max_concurrent 1000
	I0429 12:44:47.388171    3296 command_runner.go:130] >         }
	I0429 12:44:47.388171    3296 command_runner.go:130] >         cache 30
	I0429 12:44:47.388220    3296 command_runner.go:130] >         loop
	I0429 12:44:47.388220    3296 command_runner.go:130] >         reload
	I0429 12:44:47.388220    3296 command_runner.go:130] >         loadbalance
	I0429 12:44:47.388220    3296 command_runner.go:130] >     }
	I0429 12:44:47.388220    3296 command_runner.go:130] > kind: ConfigMap
	I0429 12:44:47.388329    3296 command_runner.go:130] > metadata:
	I0429 12:44:47.388329    3296 command_runner.go:130] >   creationTimestamp: "2024-04-29T12:44:32Z"
	I0429 12:44:47.388329    3296 command_runner.go:130] >   name: coredns
	I0429 12:44:47.388329    3296 command_runner.go:130] >   namespace: kube-system
	I0429 12:44:47.388329    3296 command_runner.go:130] >   resourceVersion: "227"
	I0429 12:44:47.388329    3296 command_runner.go:130] >   uid: 11d612e9-bdbd-4d3c-bda3-1675a32714c4
	I0429 12:44:47.391023    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:44:47.585846    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:44:47.981142    3296 command_runner.go:130] > configmap/coredns replaced
	I0429 12:44:47.981263    3296 start.go:946] {"host.minikube.internal": 172.26.176.1} host record injected into CoreDNS's ConfigMap
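The sed pipeline above splices a log directive before errors and a hosts block before forward, then replaces the ConfigMap; combined with the Corefile fetched earlier, the patched server block comes out as:

  .:53 {
      log
      errors
      health {
         lameduck 5s
      }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
         pods insecure
         fallthrough in-addr.arpa ip6.arpa
         ttl 30
      }
      prometheus :9153
      hosts {
         172.26.176.1 host.minikube.internal
         fallthrough
      }
      forward . /etc/resolv.conf {
         max_concurrent 1000
      }
      cache 30
      loop
      reload
      loadbalance
  }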
	I0429 12:44:47.982700    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.984024    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:47.984268    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:47.985537    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:47.986109    3296 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:44:47.986109    3296 node_ready.go:35] waiting up to 6m0s for node "multinode-409200" to be "Ready" ...
	I0429 12:44:47.986777    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:47.986861    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:47.986861    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:47.986941    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:47.987125    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:47.987125    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:47.987125    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:47.987125    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.027978    3296 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0429 12:44:48.028758    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Audit-Id: f314a025-2e4e-4940-8cd7-8ecee51f4571
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.028758    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.028758    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.028758    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.029450    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:48.030717    3296 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0429 12:44:48.031252    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.031252    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.031339    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Audit-Id: f5c59814-cf4b-4333-b29a-eb0d79883cc3
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.031377    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.031377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.031377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.031441    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"357","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.032296    3296 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"357","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.032426    3296 round_trippers.go:463] PUT https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:48.032487    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.032487    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.032487    3296 round_trippers.go:473]     Content-Type: application/json
	I0429 12:44:48.032487    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.063859    3296 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0429 12:44:48.063859    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Audit-Id: 4609687e-4e83-49b1-961a-97562f2387dc
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.063859    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.063859    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.063859    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.063859    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"359","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.497157    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0429 12:44:48.497157    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.497157    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.497157    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.497157    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:48.497157    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.497157    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.497157    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.507102    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:44:48.507369    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Audit-Id: e6440a59-e8b8-4918-8657-6f67c72b256e
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.507369    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.507369    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Content-Length: 291
	I0429 12:44:48.507369    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.507457    3296 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c17d232d-8e7d-4693-9199-8cabf54e5d48","resourceVersion":"369","creationTimestamp":"2024-04-29T12:44:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0429 12:44:48.507595    3296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-409200" context rescaled to 1 replicas
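The Scale GET/PUT sequence above is the API-level equivalent of a kubectl scale; done by hand it would be:

  kubectl -n kube-system scale deployment coredns --replicas=1
  kubectl -n kube-system get deploy coredns   # READY should settle at 1/1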
	I0429 12:44:48.518111    3296 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0429 12:44:48.518111    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Audit-Id: 5431e861-130c-4816-bedd-bbfa55282ccf
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.518111    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.518111    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.518111    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.518111    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:48.987654    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:48.987654    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:48.987654    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:48.987753    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:48.994725    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:44:48.994835    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Audit-Id: 4beb9ef9-2883-408d-98a1-5f73aac11dc7
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:48.994943    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:48.994943    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:48.994943    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:48 GMT
	I0429 12:44:48.995220    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.429025    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:49.429931    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:49.430156    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:49.430156    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:49.434434    3296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:44:49.431380    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:44:49.437224    3296 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:44:49.437224    3296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:44:49.437224    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:49.437224    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:44:49.438210    3296 addons.go:234] Setting addon default-storageclass=true in "multinode-409200"
	I0429 12:44:49.438210    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:44:49.439218    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
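Once both addons land, kube-system should contain the storage-provisioner pod and a default StorageClass (named "standard" in minikube); a hedged post-check:

  kubectl -n kube-system get pod storage-provisioner
  kubectl get storageclass   # expect: standard (default)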
	I0429 12:44:49.493646    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:49.493711    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:49.493711    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:49.493711    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:49.497211    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:49.497952    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Audit-Id: 17c3ae84-c79d-4230-aca9-cd037a4c1fed
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:49.497952    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:49.498047    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:49.498047    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:49.498047    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:49 GMT
	I0429 12:44:49.499254    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.987901    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:49.988181    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:49.988181    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:49.988181    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:49.991502    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:49.991502    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:49.991502    3296 round_trippers.go:580]     Audit-Id: f90ac51b-8b26-4bd0-8e1a-4de276b4ecc4
	I0429 12:44:49.991502    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:49.992231    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:49.992231    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:49.992231    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:49.992231    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:49 GMT
	I0429 12:44:49.993343    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:49.993502    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
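
The burst of near-identical GETs on /api/v1/nodes/multinode-409200 is minikube's node-readiness poll: it re-fetches the Node object roughly every 500 ms (per the timestamps above) and logs "Ready":"False" until the kubelet posts a Ready condition. A minimal equivalent with k8s.io/client-go (a sketch, not minikube's actual node_ready.go code; the helper name, kubeconfig source, and interval are assumptions):

    // waitNodeReady polls the apiserver until the named node reports the
    // Ready condition as True, mirroring the GET loop in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil // node reached Ready
                }
            }
            fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // interval seen in the log
            }
        }
    }

    func main() {
        // Assumes the default host kubeconfig; minikube manages its own.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "multinode-409200"); err != nil {
            panic(err)
        }
    }
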
	I0429 12:44:50.497126    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:50.497366    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:50.497366    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:50.497462    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:50.502633    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:44:50.502633    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:50.502633    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:50.502633    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:50 GMT
	I0429 12:44:50.502633    3296 round_trippers.go:580]     Audit-Id: 6d838465-5f59-4111-b729-d239b60ad1e5
	I0429 12:44:50.503643    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:50.989937    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:50.990001    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:50.990001    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:50.990001    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:50.993584    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:50.993584    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:50.993584    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:50.993584    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:50 GMT
	I0429 12:44:50.993584    3296 round_trippers.go:580]     Audit-Id: 6ea1b049-c95f-4a57-b156-184f4e7a532d
	I0429 12:44:50.993584    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.499582    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:51.499703    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:51.499703    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:51.499703    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:51.504097    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:51.504255    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Audit-Id: 129b3891-9b73-4491-96d4-b549272136b0
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:51.504255    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:51.504255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:51.504332    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:51.504332    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:51 GMT
	I0429 12:44:51.505870    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:51.725276    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:44:51.812345    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:51.813143    3296 main.go:141] libmachine: [stderr =====>] : 
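
The libmachine lines above show how the Hyper-V driver gathers VM facts: it shells out to powershell.exe with -NoProfile -NonInteractive, once for ( Hyper-V\Get-VM ... ).state and once for the first IP address of the first network adapter. A Windows-only sketch of that pattern (the PowerShell expressions are verbatim from the log; the Go helper around them is mine):

    // hypervQuery runs a PowerShell expression the way the "executing ==>"
    // lines show and returns its trimmed stdout.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hypervQuery(expr string) (string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr)
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, _ := hypervQuery(`( Hyper-V\Get-VM multinode-409200 ).state`)
        ip, _ := hypervQuery(`(( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]`)
        fmt.Println(state, ip) // e.g. "Running 172.26.185.116"
    }
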
	I0429 12:44:51.813323    3296 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:44:51.813345    3296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
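
The "scp memory" line means the 271-byte storageclass.yaml manifest is streamed from minikube's memory to the guest path over SSH rather than copied from a host file. One plausible shape for such a transfer, assuming a tee-based write as root (the real mechanics are inside minikube's ssh_runner; this is only a sketch):

    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams bytes from memory to a root-owned path in
    // the guest over an established SSH client.
    func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee as root so /etc/kubernetes/addons is writable; stdout discarded.
        return sess.Run("sudo tee " + path + " >/dev/null")
    }
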
	I0429 12:44:51.813446    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:44:51.990003    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:51.990243    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:51.990243    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:51.990243    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:51.994853    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:51.994961    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Audit-Id: fd025c19-6e78-4182-91a3-cabf4cd9eef4
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:51.994961    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:51.995033    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:51.995033    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:51.995033    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:51 GMT
	I0429 12:44:51.995280    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:51.995831    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:52.495839    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:52.495906    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:52.495906    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:52.495972    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:52.499456    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:52.500476    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:52.500476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:52.500476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:52 GMT
	I0429 12:44:52.500476    3296 round_trippers.go:580]     Audit-Id: 9580c8ca-591c-47ad-be84-8fee1ba5737e
	I0429 12:44:52.500476    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:52.988440    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:52.988440    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:52.988440    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:52.988440    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:52.992068    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:52.992068    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:52.992068    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:52.992346    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:52.992346    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:52 GMT
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Audit-Id: aea305e3-0942-47d6-b689-14b4a7afbb67
	I0429 12:44:52.992346    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:52.993054    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:53.492686    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:53.492686    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:53.492686    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:53.492686    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:53.497503    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:53.497879    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Audit-Id: 1121d8c2-9418-49fb-9d73-ebb54f0d7b5e
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:53.497940    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:53.497940    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:53.497940    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:53 GMT
	I0429 12:44:53.498274    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:53.987193    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:53.987252    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:53.987252    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:53.987252    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:53.990798    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:53.990798    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Audit-Id: b800aac0-4546-44f3-adc5-a0d7d3b96135
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:53.990798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:53.990798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:53.990798    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:53 GMT
	I0429 12:44:53.990798    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:54.053780    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:44:54.393125    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:44:54.393125    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:54.394344    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
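
Once the IP query returns 172.26.185.116, sshutil opens an SSH connection to the guest as user docker, authenticating with the per-machine id_rsa key on port 22, exactly the parameters in the line above. A sketch of establishing that client with golang.org/x/crypto/ssh (minikube's sshutil wraps the same package; host-key checking is disabled here purely for the sketch):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the guest with the user, key, and port logged by
    // sshutil above.
    func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
    }

    func main() {
        client, err := newSSHClient("172.26.185.116",
            `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa`)
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }
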
	I0429 12:44:54.494172    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:54.494172    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:54.494172    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:54.494172    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:54.498184    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:54.498184    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Audit-Id: 0ce7ac22-e96e-4cc8-a586-c8fb2a5464cf
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:54.498184    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:54.498184    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:54.498184    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:54 GMT
	I0429 12:44:54.498184    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:54.499180    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:54.542173    3296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
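
Addon manifests are applied inside the guest with the version-pinned kubectl binary and the in-VM kubeconfig; the ssh_runner line above shows the exact command. A helper sketch running that same command over an established x/crypto/ssh client (paths are verbatim from the log; wiring them into this function is my own framing):

    package sketch

    import "golang.org/x/crypto/ssh"

    // applyAddon mirrors the ssh_runner command above: run the pinned
    // kubectl inside the guest against the in-VM kubeconfig.
    func applyAddon(client *ssh.Client, manifest string) (string, error) {
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.30.0/kubectl apply -f " + manifest)
        return string(out), err
    }
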
	I0429 12:44:54.987695    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:54.987695    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:54.987695    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:54.987695    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.121049    3296 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I0429 12:44:55.121049    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.121154    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.121154    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.121154    3296 round_trippers.go:580]     Audit-Id: d2cf4358-518d-4e2b-b3b4-d9d06fb318d4
	I0429 12:44:55.121498    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:55.283741    3296 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0429 12:44:55.283834    3296 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0429 12:44:55.283897    3296 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0429 12:44:55.283897    3296 command_runner.go:130] > pod/storage-provisioner created
	I0429 12:44:55.498208    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:55.498269    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:55.498269    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:55.498269    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.501876    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:55.501876    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Audit-Id: 12ca5c45-8941-45dc-84a9-4159fc888677
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.501954    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.501954    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.501954    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.502158    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:55.992048    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:55.992048    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:55.992048    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:55.992048    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:55.994904    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:44:55.994904    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:55.995929    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:55.995929    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:55.995929    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:55 GMT
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Audit-Id: 6ec09ca9-3be4-4d9b-be97-56d8f9a7a96d
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:55.996002    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:55.996284    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:56.499993    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:56.500060    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.500060    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.500060    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.503881    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.503881    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.503881    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.503881    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Audit-Id: 1e1a4dde-85ad-4c09-9ab3-a55c1ea5bb43
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.504419    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.504485    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:56.505286    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:56.639576    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:44:56.640629    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:44:56.640629    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:44:56.766412    3296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:44:56.941419    3296 command_runner.go:130] > storageclass.storage.k8s.io/standard created
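
The object just created is echoed back by the GET below: a StorageClass named standard with provisioner k8s.io/minikube-hostpath, labeled addonmanager.kubernetes.io/mode=EnsureExists and annotated as the default class. A typed client-go equivalent of that manifest (a sketch; the addon actually ships as YAML applied by kubectl):

    package sketch

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createStandardClass builds the "standard" StorageClass whose fields
    // appear in the last-applied-configuration annotation below.
    func createStandardClass(ctx context.Context, cs *kubernetes.Clientset) error {
        sc := &storagev1.StorageClass{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "standard",
                Labels: map[string]string{"addonmanager.kubernetes.io/mode": "EnsureExists"},
                Annotations: map[string]string{
                    "storageclass.kubernetes.io/is-default-class": "true",
                },
            },
            Provisioner: "k8s.io/minikube-hostpath",
        }
        _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
        return err
    }
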
	I0429 12:44:56.943064    3296 round_trippers.go:463] GET https://172.26.185.116:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:44:56.943160    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.943160    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.943160    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.946411    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.946476    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Audit-Id: 433f1ba2-60c3-44e5-a79c-51b09710afa1
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.946476    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.946476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.946476    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.946550    3296 round_trippers.go:580]     Content-Length: 1273
	I0429 12:44:56.946550    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.946550    3296 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0429 12:44:56.947079    3296 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 12:44:56.947239    3296 round_trippers.go:463] PUT https://172.26.185.116:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:44:56.947239    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.947239    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.947239    3296 round_trippers.go:473]     Content-Type: application/json
	I0429 12:44:56.947239    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:56.950933    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:56.950933    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:56.950933    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:56.950933    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:56.950933    3296 round_trippers.go:580]     Content-Length: 1220
	I0429 12:44:56.950933    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:56 GMT
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Audit-Id: 806f726f-3d36-4312-b0f9-6c2058e71382
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:56.951109    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:56.951285    3296 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f5d59b0-3fe5-4a95-8088-dbd2aae085b6","resourceVersion":"397","creationTimestamp":"2024-04-29T12:44:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-29T12:44:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0429 12:44:56.954419    3296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:44:56.957679    3296 addons.go:505] duration metric: took 9.8346009s for enable addons: enabled=[storage-provisioner default-storageclass]
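
The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT on .../storageclasses/standard above is a read-modify-write: the class is read back and written again to ensure the storageclass.kubernetes.io/is-default-class annotation sticks before the addons are declared enabled. A client-go sketch of that exchange (hypothetical helper name; minikube's real logic lives in its storageclass addon code):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureDefaultClass mirrors the GET + PUT pair in the log: fetch the
    // StorageClass, set the default-class annotation, write it back.
    func ensureDefaultClass(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
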
	I0429 12:44:56.986568    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:56.986568    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:56.986682    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:56.986682    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.006275    3296 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0429 12:44:57.007061    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.007061    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Audit-Id: 6216f9d2-42b3-4511-903b-d7b986c00ed3
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.007061    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.007061    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.007061    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:57.487285    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:57.487613    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:57.487613    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:57.487613    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.494601    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:44:57.494601    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.494601    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.494601    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Audit-Id: 2e534af6-cd9e-43af-af28-8c1bcc1f9efa
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.494601    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.495335    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:57.987102    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:57.987102    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:57.987189    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:57.987189    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:57.991241    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:57.991314    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:57.991314    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:57 GMT
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Audit-Id: bf0ecebf-ceb3-4179-abb5-b96d55120d71
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:57.991428    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:57.991428    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:57.991428    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:57.992189    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.486877    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:58.486964    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:58.486964    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:58.486964    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:58.490349    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:44:58.490349    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:58.490349    3296 round_trippers.go:580]     Audit-Id: 6393d1cf-7a26-4f32-9232-cae6ce627786
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:58.490662    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:58.490662    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:58.490662    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:58 GMT
	I0429 12:44:58.490835    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.987199    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:58.987271    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:58.987271    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:58.987271    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:58.991919    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:58.991919    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:58.991919    3296 round_trippers.go:580]     Audit-Id: ad4cb3b8-242d-41ca-ad8a-a7d767a7bc16
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:58.992276    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:58.992276    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:58.992276    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:58 GMT
	I0429 12:44:58.992644    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:58.993258    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:44:59.499083    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:59.499083    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:59.499083    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:59.499173    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:44:59.503490    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:44:59.503797    3296 round_trippers.go:577] Response Headers:
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:44:59.503797    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:44:59.503797    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:44:59 GMT
	I0429 12:44:59.503797    3296 round_trippers.go:580]     Audit-Id: 7d006809-6f37-417c-8d2d-ceabc88f5c0f
	I0429 12:44:59.504601    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:44:59.999824    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:44:59.999824    3296 round_trippers.go:469] Request Headers:
	I0429 12:44:59.999824    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:44:59.999824    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.003417    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:00.003417    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.003417    3296 round_trippers.go:580]     Audit-Id: 7447b2af-64d8-4510-88a0-932e5399c7e8
	I0429 12:45:00.003417    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.003666    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.003666    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.003666    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.003666    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:00 GMT
	I0429 12:45:00.004352    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.488108    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:00.488108    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:00.488108    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:00.488108    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.493711    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:00.494629    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Audit-Id: 850ddbf7-5fb9-4d9c-ab44-d2b06881b8ad
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.494629    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.494629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.494629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.494716    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:00 GMT
	I0429 12:45:00.495291    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.988408    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:00.988408    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:00.988408    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:00.988408    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:00.998088    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:45:00.998088    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Audit-Id: fae8b223-43af-47e7-a8e1-df95a944f347
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:00.998088    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:00.998088    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:00.998088    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:01 GMT
	I0429 12:45:00.998854    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:00.999491    3296 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 12:45:01.487801    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:01.487892    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.487892    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:01.487892    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.491255    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:01.491255    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Audit-Id: b1561a53-6d82-4792-a0cb-a54e5b0add20
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:01.491255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:01.491255    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:01.491255    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:01 GMT
	I0429 12:45:01.492595    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"326","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0429 12:45:01.992815    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:01.992815    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.992921    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.992921    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:01.998238    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:01.998831    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:01.998831    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:01.998831    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:01.998831    3296 round_trippers.go:580]     Audit-Id: 48ad1f42-c9c3-4e50-a0f9-f7f8f769e6ae
	I0429 12:45:01.998977    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:01.999717    3296 node_ready.go:49] node "multinode-409200" has status "Ready":"True"
	I0429 12:45:01.999717    3296 node_ready.go:38] duration metric: took 14.0134996s for node "multinode-409200" to be "Ready" ...
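The loop above — one GET of /api/v1/nodes/multinode-409200 roughly every 500ms until the node's Ready condition flips (note the resourceVersion moving from 326 to 401 on the transition) — is the standard client-go polling pattern. A minimal library-style sketch of the same check, assuming a configured *kubernetes.Clientset; the helper name and the exact timeout are illustrative, not minikube's node_ready.go source:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node every 500ms (up to 6m, mirroring the
// waits logged above) until its Ready condition reports True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}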
	I0429 12:45:01.999717    3296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:45:01.999951    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:01.999951    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:01.999951    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:01.999951    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.007629    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:02.007629    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Audit-Id: de4530cf-d6fa-4642-9986-1130033d04d0
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.007629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.007629    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.007629    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.008620    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0429 12:45:02.014626    3296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:02.014626    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:02.014626    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.014626    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.014626    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.018623    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.019129    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.019129    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.019129    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Audit-Id: c932abd8-5efc-4791-ba87-272407a6105e
	I0429 12:45:02.019129    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.019129    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:02.019760    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:02.019760    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.019760    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.019760    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.022335    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:02.022335    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Audit-Id: 005cbf7b-dcb5-4c3c-a918-a06dc9d91ff3
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.022690    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.022690    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.022690    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.022751    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.022751    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:02.517658    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:02.517720    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.517720    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.517720    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.521377    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.521377    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.521377    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.521377    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Audit-Id: 0ba0b24e-578a-44e4-aacf-85bf2cbf2f35
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.521910    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.521910    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.522201    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:02.522907    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:02.522979    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:02.522979    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:02.522979    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:02.526376    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:02.527262    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Audit-Id: 50407530-6988-4c95-821a-2707da3eebd0
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:02.527262    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:02.527262    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:02.527262    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:02 GMT
	I0429 12:45:02.528375    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.022363    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:03.022363    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.022363    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.022363    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.027925    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:03.027925    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.027925    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Audit-Id: ffc9b74e-abb8-4b52-858e-4dc1eebddc20
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.027925    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.027925    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.028687    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"406","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0429 12:45:03.029435    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.029435    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.029435    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.029435    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.036065    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:45:03.036065    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Audit-Id: d91f1d54-d40b-493d-8b0e-f03031786d88
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.036065    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.036065    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.036065    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.037065    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.529021    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:45:03.529021    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.529021    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.529021    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.534002    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.534002    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Audit-Id: b7814e9b-660c-42a5-b5fe-46f0ed38acec
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.534002    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.534002    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.534233    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.534233    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.534697    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0429 12:45:03.535742    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.535804    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.535804    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.535804    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.548932    3296 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:45:03.548932    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Audit-Id: 9ebd281a-5935-4afb-8543-d9008eba601b
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.548932    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.548932    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.548932    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.549963    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.549963    3296 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.549963    3296 pod_ready.go:81] duration metric: took 1.5353247s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
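Each per-pod wait that follows (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reduces to the same test: fetch the Pod and inspect its PodReady condition. A sketch of that check — an illustrative helper, not minikube's pod_ready.go source:

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the PodReady condition is True; each
// `pod_ready.go:92 ... has status "Ready":"True"` line above is the
// positive result of this check on a freshly fetched Pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}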
	I0429 12:45:03.549963    3296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.549963    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 12:45:03.549963    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.549963    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.549963    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.569942    3296 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0429 12:45:03.569942    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.569942    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.569942    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Audit-Id: d61b0e30-7689-468f-b6cc-eb51e5e95a41
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.570399    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.570399    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.570633    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"d181e36d-2901-4660-a441-6f6b5f3d6c5f","resourceVersion":"381","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.185.116:2379","kubernetes.io/config.hash":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.mirror":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.seen":"2024-04-29T12:44:32.885743739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0429 12:45:03.571180    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.571180    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.571180    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.571180    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.574016    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:03.574016    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.574016    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.574016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.574016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.574016    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.574412    3296 round_trippers.go:580]     Audit-Id: 7b4c4ef3-470b-4d30-a9ca-f03b2d6eeff1
	I0429 12:45:03.574412    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.574576    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.575177    3296 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.575256    3296 pod_ready.go:81] duration metric: took 25.2136ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.575256    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.575446    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 12:45:03.575446    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.575501    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.575501    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.578015    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:45:03.579015    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Audit-Id: 3eaf2df8-6bf2-489f-88bc-29f366d94d6f
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.579015    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.579015    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.579015    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.579308    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"da427161-547d-4e8d-a545-8b243ce10f12","resourceVersion":"380","creationTimestamp":"2024-04-29T12:44:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.185.116:8443","kubernetes.io/config.hash":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.mirror":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.seen":"2024-04-29T12:44:24.392874586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0429 12:45:03.579984    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.579984    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.579984    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.580048    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.581541    3296 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 12:45:03.581541    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.581541    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Audit-Id: fd8ff95d-ae10-44fc-a86c-dcbc1e1e497c
	I0429 12:45:03.581541    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.582465    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.582465    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.582986    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.584577    3296 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.584618    3296 pod_ready.go:81] duration metric: took 9.3622ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.584618    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.584800    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 12:45:03.584800    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.584800    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.584800    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.592420    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:03.592420    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.592420    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.592420    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Audit-Id: cdfdc759-a918-4f6b-8211-ea8f62b39f8b
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.592420    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.593419    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"382","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0429 12:45:03.593419    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.593419    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.593419    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.593419    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.596453    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:03.596453    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.596453    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.596453    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Audit-Id: bc0b5cb2-d269-41fb-9405-ceb55c938ed5
	I0429 12:45:03.596453    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.596453    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.596453    3296 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.596453    3296 pod_ready.go:81] duration metric: took 11.8345ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.596453    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.596453    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:45:03.596453    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.596453    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.596453    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.600415    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:03.600415    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.600415    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.600415    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Audit-Id: e08be7e5-51a1-4fb2-b260-4fd14e037e01
	I0429 12:45:03.600415    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.600415    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"375","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0429 12:45:03.600415    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:03.600415    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.600415    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.600415    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.604425    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.604425    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Audit-Id: 9d692b8f-80ae-41bb-a404-3c84c9d38af0
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.604425    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.604425    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.604425    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.604425    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"401","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0429 12:45:03.604425    3296 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:03.604425    3296 pod_ready.go:81] duration metric: took 7.9727ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.604425    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:03.797053    3296 request.go:629] Waited for 192.3431ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:45:03.797345    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:45:03.797345    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:03.797345    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:03.797345    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:03.801930    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:03.802003    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Audit-Id: e08be03c-41b3-4327-b6ef-628d7a103e75
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:03.802003    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:03.802003    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:03.802003    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:03.802068    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:03 GMT
	I0429 12:45:03.802122    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"379","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0429 12:45:04.001132    3296 request.go:629] Waited for 197.8292ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:04.001300    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:45:04.001300    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.001300    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.001300    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.005356    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:45:04.005356    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.005356    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.005356    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.005356    3296 round_trippers.go:580]     Audit-Id: dbd289b4-c74d-48e3-9263-2cb4a6a20a89
	I0429 12:45:04.005938    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:45:04.006601    3296 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:45:04.006677    3296 pod_ready.go:81] duration metric: took 402.2485ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:45:04.006677    3296 pod_ready.go:38] duration metric: took 2.0069438s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:45:04.006751    3296 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:45:04.020619    3296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:45:04.052599    3296 command_runner.go:130] > 2065
	I0429 12:45:04.053460    3296 api_server.go:72] duration metric: took 16.9303268s to wait for apiserver process to appear ...
	I0429 12:45:04.053544    3296 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:45:04.053619    3296 api_server.go:253] Checking apiserver healthz at https://172.26.185.116:8443/healthz ...
	I0429 12:45:04.064712    3296 api_server.go:279] https://172.26.185.116:8443/healthz returned 200:
	ok
	I0429 12:45:04.065147    3296 round_trippers.go:463] GET https://172.26.185.116:8443/version
	I0429 12:45:04.065147    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.065147    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.065147    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.066701    3296 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 12:45:04.066701    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.066701    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.066701    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.067336    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.067336    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Content-Length: 263
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.067336    3296 round_trippers.go:580]     Audit-Id: 89817c99-cc06-411d-b40d-f89432a8d119
	I0429 12:45:04.067336    3296 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 12:45:04.067490    3296 api_server.go:141] control plane version: v1.30.0
	I0429 12:45:04.067599    3296 api_server.go:131] duration metric: took 14.0544ms to wait for apiserver health ...
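The two probes above are the health gate minikube applies before moving on: GET /healthz must return 200 with body "ok", then GET /version is decoded to record the control-plane version. A minimal sketch of that pattern in Go, assuming a self-signed cluster CA (so certificate verification is skipped); waitForAPIServer and versionInfo are illustrative names, not minikube's own code:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// versionInfo mirrors the fields of the /version response body logged above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

// waitForAPIServer polls /healthz until it returns 200, then reads /version.
func waitForAPIServer(apiURL string, timeout time.Duration) (*versionInfo, error) {
	client := &http.Client{
		// The test cluster presents a self-signed certificate, so verification
		// is skipped here; a real caller would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiURL + "/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			vresp, err := client.Get(apiURL + "/version")
			if err != nil {
				return nil, err
			}
			defer vresp.Body.Close()
			var v versionInfo
			if err := json.NewDecoder(vresp.Body).Decode(&v); err != nil {
				return nil, err
			}
			return &v, nil
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return nil, fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	v, err := waitForAPIServer("https://172.26.185.116:8443", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("control plane version:", v.GitVersion)
}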
	I0429 12:45:04.067645    3296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:45:04.206940    3296 request.go:629] Waited for 139.2937ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.207142    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.207142    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.207142    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.207142    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.212478    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:04.212478    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.212478    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.212478    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.212478    3296 round_trippers.go:580]     Audit-Id: ec425d83-0f1a-431c-b584-2765f718b45d
	I0429 12:45:04.215302    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0429 12:45:04.218767    3296 system_pods.go:59] 8 kube-system pods found
	I0429 12:45:04.218767    3296 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "etcd-multinode-409200" [d181e36d-2901-4660-a441-6f6b5f3d6c5f] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-apiserver-multinode-409200" [da427161-547d-4e8d-a545-8b243ce10f12] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 12:45:04.218767    3296 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 12:45:04.218767    3296 system_pods.go:74] duration metric: took 151.1208ms to wait for pod list to return data ...
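The "Waited for ... due to client-side throttling" lines in this section are not server-side priority-and-fairness delays; they come from the Kubernetes client's token-bucket rate limiter. By default client-go allows roughly QPS=5 with a burst of 10 (these defaults are an assumption about this run, not values read from the log). A small Go sketch of the same behavior using golang.org/x/time/rate:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// ~5 requests/second with a burst of 10, matching client-go's usual defaults.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("limiter error:", err)
			return
		}
		if waited := time.Since(start); waited > time.Millisecond {
			// Mirrors the client-go log line: the request slept before being sent.
			fmt.Printf("Waited for %v due to client-side throttling, request: GET /api/v1/nodes\n",
				waited.Round(time.Millisecond))
		}
	}
}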
	I0429 12:45:04.218767    3296 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:45:04.407309    3296 request.go:629] Waited for 188.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:45:04.407617    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/default/serviceaccounts
	I0429 12:45:04.407617    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.407617    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.407617    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.411864    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:45:04.411864    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Audit-Id: c1ebd2d8-a1e9-4583-a374-eec2950e9945
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.411864    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.411864    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Content-Length: 261
	I0429 12:45:04.411864    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.411864    3296 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c200474-8705-40aa-8512-ec20a74a9ff0","resourceVersion":"323","creationTimestamp":"2024-04-29T12:44:46Z"}}]}
	I0429 12:45:04.411864    3296 default_sa.go:45] found service account: "default"
	I0429 12:45:04.411864    3296 default_sa.go:55] duration metric: took 193.0951ms for default service account to be created ...
	I0429 12:45:04.411864    3296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:45:04.596858    3296 request.go:629] Waited for 184.2596ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.596972    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:45:04.596972    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.597047    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.597047    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.602297    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:45:04.602297    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Audit-Id: 9d8cdeef-7574-4196-af75-9235e7830d44
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.602297    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.602297    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.602297    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.604234    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0429 12:45:04.607126    3296 system_pods.go:86] 8 kube-system pods found
	I0429 12:45:04.607339    3296 system_pods.go:89] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "etcd-multinode-409200" [d181e36d-2901-4660-a441-6f6b5f3d6c5f] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-apiserver-multinode-409200" [da427161-547d-4e8d-a545-8b243ce10f12] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 12:45:04.607339    3296 system_pods.go:89] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 12:45:04.607528    3296 system_pods.go:89] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 12:45:04.607528    3296 system_pods.go:89] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 12:45:04.607528    3296 system_pods.go:126] duration metric: took 195.6622ms to wait for k8s-apps to be running ...
	I0429 12:45:04.607528    3296 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:45:04.620020    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:45:04.646095    3296 system_svc.go:56] duration metric: took 38.5674ms WaitForService to wait for kubelet
	I0429 12:45:04.646095    3296 kubeadm.go:576] duration metric: took 17.5229576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:45:04.646246    3296 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:45:04.798355    3296 request.go:629] Waited for 151.798ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes
	I0429 12:45:04.798670    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes
	I0429 12:45:04.798670    3296 round_trippers.go:469] Request Headers:
	I0429 12:45:04.798670    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:45:04.798670    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:45:04.807122    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:45:04.807122    3296 round_trippers.go:577] Response Headers:
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Audit-Id: 27cd78e8-c916-4718-a2ef-21649bddc2f7
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:45:04.807122    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:45:04.807122    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:45:04.807122    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:45:04 GMT
	I0429 12:45:04.807122    3296 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0429 12:45:04.808046    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:45:04.808046    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:45:04.808046    3296 node_conditions.go:105] duration metric: took 161.5684ms to run NodePressure ...
	I0429 12:45:04.808566    3296 start.go:240] waiting for startup goroutines ...
	I0429 12:45:04.808566    3296 start.go:245] waiting for cluster config update ...
	I0429 12:45:04.808566    3296 start.go:254] writing updated cluster config ...
	I0429 12:45:04.812510    3296 out.go:177] 
	I0429 12:45:04.815299    3296 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:45:04.823733    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:45:04.824679    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:45:04.829645    3296 out.go:177] * Starting "multinode-409200-m02" worker node in "multinode-409200" cluster
	I0429 12:45:04.832482    3296 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:45:04.832482    3296 cache.go:56] Caching tarball of preloaded images
	I0429 12:45:04.833541    3296 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:45:04.833752    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:45:04.833905    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:45:04.840362    3296 start.go:360] acquireMachinesLock for multinode-409200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:45:04.840766    3296 start.go:364] duration metric: took 208µs to acquireMachinesLock for "multinode-409200-m02"
	I0429 12:45:04.841073    3296 start.go:93] Provisioning new machine with config: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:45:04.841073    3296 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0429 12:45:04.844315    3296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:45:04.844315    3296 start.go:159] libmachine.API.Create for "multinode-409200" (driver="hyperv")
	I0429 12:45:04.844315    3296 client.go:168] LocalClient.Create starting
	I0429 12:45:04.845902    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Decoding PEM data...
	I0429 12:45:04.846017    3296 main.go:141] libmachine: Parsing certificate...
	I0429 12:45:04.846673    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0429 12:45:06.804924    3296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0429 12:45:06.805314    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:06.805397    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0429 12:45:08.591204    3296 main.go:141] libmachine: [stdout =====>] : False
	
	I0429 12:45:08.591826    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:08.592018    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:45:10.108966    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:45:10.109042    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:10.109101    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:45:13.848914    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:45:13.848914    3296 main.go:141] libmachine: [stderr =====>] : 
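Every "[executing ==>]" entry in this log is one PowerShell child process, with its output echoed back as the "[stdout =====>]"/"[stderr =====>]" pairs that follow. A Go sketch of a helper in that style; runPowerShell and the log format are illustrative, not the actual driver code:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell executes one command and captures both output streams.
// -NoProfile and -NonInteractive match the flags visible in the log, so the
// command runs without user profiles or interactive prompts.
func runPowerShell(command string) (stdout, stderr string, err error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return out.String(), errBuf.String(), err
}

func main() {
	out, errOut, err := runPowerShell(`( Hyper-V\Get-VM multinode-409200-m02 ).state`)
	if err != nil {
		fmt.Println("powershell failed:", err)
	}
	fmt.Printf("[stdout =====>] : %s\n", out)
	fmt.Printf("[stderr =====>] : %s\n", errOut)
}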
	I0429 12:45:13.851597    3296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:45:14.372678    3296 main.go:141] libmachine: Creating SSH key...
	I0429 12:45:15.046114    3296 main.go:141] libmachine: Creating VM...
	I0429 12:45:15.046114    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0429 12:45:18.024556    3296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0429 12:45:18.024648    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:18.024648    3296 main.go:141] libmachine: Using switch "Default Switch"
	I0429 12:45:18.024648    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0429 12:45:19.830813    3296 main.go:141] libmachine: [stdout =====>] : True
	
	I0429 12:45:19.830994    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:19.830994    3296 main.go:141] libmachine: Creating VHD
	I0429 12:45:19.831082    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0429 12:45:23.499514    3296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8687AA4C-C137-44FB-9D96-F96300160B58
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0429 12:45:23.499514    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:23.499514    3296 main.go:141] libmachine: Writing magic tar header
	I0429 12:45:23.499514    3296 main.go:141] libmachine: Writing SSH key tar header
	I0429 12:45:23.509648    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0429 12:45:26.692403    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:26.692627    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:26.692685    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd' -SizeBytes 20000MB
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [stderr =====>] : 
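The "Writing magic tar header"/"Writing SSH key tar header" steps above place a tar stream carrying the freshly generated SSH key directly into the small fixed VHD, where the guest image can pick it up on first boot; only then is the disk converted to a dynamic VHD and resized to its final 20000MB. A rough Go sketch of that idea, treating the file layout (tar at offset 0, footer untouched at the end of the fixed VHD) as an assumption rather than the driver's exact format:

package main

import (
	"archive/tar"
	"log"
	"os"
)

// writeKeyTar writes a tar archive containing the SSH public key at the start
// of the raw disk file. For a fixed VHD the data area begins at offset 0 and
// the footer sits in the last 512 bytes, so a ~2KB tar at the front is assumed
// not to clobber anything.
func writeKeyTar(vhdPath string, pubKey []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f) // tar stream begins at offset 0 of the raw disk
	hdr := &tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0644,
		Size: int64(len(pubKey)),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	return tw.Close()
}

func main() {
	base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\`
	key, err := os.ReadFile(base + `id_rsa.pub`)
	if err != nil {
		log.Fatal(err)
	}
	if err := writeKeyTar(base+`fixed.vhd`, key); err != nil {
		log.Fatal(err)
	}
}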
	I0429 12:45:29.255187    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-409200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:32.923583    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-409200-m02 -DynamicMemoryEnabled $false
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:35.190827    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-409200-m02 -Count 2
	I0429 12:45:37.361678    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:37.361678    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:37.362213    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\boot2docker.iso'
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:39.984208    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-409200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\disk.vhd'
	I0429 12:45:42.658479    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:42.659184    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:42.659184    3296 main.go:141] libmachine: Starting VM...
	I0429 12:45:42.659184    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m02
	I0429 12:45:45.749580    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:45.750057    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:45.750057    3296 main.go:141] libmachine: Waiting for host to start...
	I0429 12:45:45.750057    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:48.069884    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:48.069884    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:48.070148    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:45:50.607310    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:50.607310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:51.618434    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:53.814057    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:53.814268    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:53.814268    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:45:56.396318    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:45:56.396408    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:57.400139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:45:59.628138    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:45:59.629129    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:45:59.629209    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:02.151932    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:46:02.152954    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:03.162424    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:05.370899    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:07.948312    3296 main.go:141] libmachine: [stdout =====>] : 
	I0429 12:46:07.949519    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:08.958127    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:11.175506    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:13.895916    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:13.895916    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:13.896838    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:16.080488    3296 machine.go:94] provisionDockerMachine start ...
	I0429 12:46:16.080488    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:18.280232    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:20.885470    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:20.885470    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:20.892986    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:20.905116    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:20.905163    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:46:21.028078    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 12:46:21.028078    3296 buildroot.go:166] provisioning hostname "multinode-409200-m02"
	I0429 12:46:21.028078    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:23.222003    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:23.222672    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:23.222863    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:25.813982    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:25.814625    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:25.820174    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:25.820865    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:25.820865    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200-m02 && echo "multinode-409200-m02" | sudo tee /etc/hostname
	I0429 12:46:25.976952    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200-m02
	
	I0429 12:46:25.977060    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:28.125621    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:30.696159    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:30.696159    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:30.703315    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:30.703999    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:30.703999    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:46:30.842446    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
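Provisioning the hostname is three plain shell steps over SSH, all visible above: set the live hostname, persist it to /etc/hostname, and make 127.0.1.1 resolve to the new name in /etc/hosts. A Go sketch that composes the same commands as logged; hostnameCommands is an illustrative helper name:

package main

import "fmt"

// hostnameCommands returns the shell snippets run over SSH above, in order.
func hostnameCommands(name string) []string {
	return []string{
		// Set the hostname now and persist it across reboots.
		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
		// Rewrite (or append) the 127.0.1.1 entry so the name resolves locally.
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCommands("multinode-409200-m02") {
		fmt.Println(c)
	}
}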
	I0429 12:46:30.842446    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 12:46:30.842446    3296 buildroot.go:174] setting up certificates
	I0429 12:46:30.842446    3296 provision.go:84] configureAuth start
	I0429 12:46:30.842446    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:32.965275    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:32.966273    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:32.966273    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:35.565457    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:35.565565    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:35.565565    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:37.707815    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:37.708682    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:37.708741    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:40.310992    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:40.311263    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:40.311263    3296 provision.go:143] copyHostCerts
	I0429 12:46:40.311498    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 12:46:40.312060    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 12:46:40.312148    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 12:46:40.312647    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 12:46:40.313776    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 12:46:40.313776    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 12:46:40.313776    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 12:46:40.314652    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 12:46:40.315444    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 12:46:40.316176    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 12:46:40.316176    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 12:46:40.316251    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 12:46:40.317490    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200-m02 san=[127.0.0.1 172.26.183.208 localhost minikube multinode-409200-m02]
	I0429 12:46:40.489533    3296 provision.go:177] copyRemoteCerts
	I0429 12:46:40.500914    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:46:40.500914    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:42.648444    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:42.648500    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:42.648500    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:45.288552    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:45.288887    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:45.289051    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:46:45.400108    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8991564s)
	I0429 12:46:45.400108    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 12:46:45.400765    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 12:46:45.454027    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 12:46:45.454114    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0429 12:46:45.506432    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 12:46:45.506860    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:46:45.561859    3296 provision.go:87] duration metric: took 14.7192983s to configureAuth
	I0429 12:46:45.561945    3296 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:46:45.562643    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:46:45.562708    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:47.764121    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:47.764121    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:47.765116    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:50.332541    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:50.332943    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:50.339542    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:50.339686    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:50.339686    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:46:50.481784    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:46:50.481784    3296 buildroot.go:70] root file system type: tmpfs
	I0429 12:46:50.482020    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:46:50.482148    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:52.705401    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:52.705401    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:52.706452    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:46:55.300419    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:46:55.300611    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:55.307533    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:46:55.307664    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:46:55.307664    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.185.116"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:46:55.472533    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.185.116
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:46:55.472683    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:46:57.597428    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:46:57.597485    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:46:57.597485    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:00.149249    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:00.149249    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:00.156116    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:00.156454    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:00.156454    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:47:02.385337    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:47:02.385337    3296 machine.go:97] duration metric: took 46.3044876s to provisionDockerMachine
	I0429 12:47:02.385440    3296 client.go:171] duration metric: took 1m57.5392233s to LocalClient.Create
	I0429 12:47:02.385440    3296 start.go:167] duration metric: took 1m57.5402109s to libmachine.API.Create "multinode-409200"
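The docker.service update a few lines above is a write-if-changed idiom: diff -u exits zero only when the staged unit matches the installed one, so the branch after || (move the new file into place, daemon-reload, enable, restart) runs only when something changed, or, as on this first boot, when the unit does not exist yet. A Go sketch that builds the same compound command for an arbitrary unit; updateUnitCmd is an illustrative name:

package main

import "fmt"

// updateUnitCmd returns a shell command that installs <unit>.new over <unit>
// and restarts the service only when the two files differ (or the old one is
// missing, which also makes diff exit non-zero).
func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}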
	I0429 12:47:02.385525    3296 start.go:293] postStartSetup for "multinode-409200-m02" (driver="hyperv")
	I0429 12:47:02.385566    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:47:02.399065    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:47:02.399065    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:04.523660    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:04.523741    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:04.523741    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:07.089102    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:07.089102    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:07.089875    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:07.199491    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8003032s)
	I0429 12:47:07.213686    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:47:07.222251    3296 command_runner.go:130] > NAME=Buildroot
	I0429 12:47:07.222251    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 12:47:07.222251    3296 command_runner.go:130] > ID=buildroot
	I0429 12:47:07.222251    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 12:47:07.222251    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 12:47:07.222251    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:47:07.222251    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 12:47:07.222845    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 12:47:07.223880    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 12:47:07.223966    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 12:47:07.236998    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:47:07.258879    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 12:47:07.309777    3296 start.go:296] duration metric: took 4.9241487s for postStartSetup
	I0429 12:47:07.312753    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:09.488725    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:09.490183    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:09.490183    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:12.098690    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:12.098690    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:12.098925    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 12:47:12.101723    3296 start.go:128] duration metric: took 2m7.2596607s to createHost
	I0429 12:47:12.101862    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:14.247044    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:14.247280    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:14.247280    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:16.891372    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:16.891372    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:16.899005    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:16.899165    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:16.899165    3296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:47:17.035974    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714394837.032590502
	
	I0429 12:47:17.036116    3296 fix.go:216] guest clock: 1714394837.032590502
	I0429 12:47:17.036116    3296 fix.go:229] Guest: 2024-04-29 12:47:17.032590502 +0000 UTC Remote: 2024-04-29 12:47:12.1017238 +0000 UTC m=+348.223296901 (delta=4.930866702s)
	I0429 12:47:17.036116    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:19.226390    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:19.226390    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:19.226772    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:21.808839    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:21.808839    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:21.815751    3296 main.go:141] libmachine: Using SSH client type: native
	I0429 12:47:21.815751    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.183.208 22 <nil> <nil>}
	I0429 12:47:21.815751    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714394837
	I0429 12:47:21.956676    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 12:47:17 UTC 2024
	
	I0429 12:47:21.956676    3296 fix.go:236] clock set: Mon Apr 29 12:47:17 UTC 2024
	 (err=<nil>)
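
The clock-sync step above reads the guest clock (the command's format verbs are mangled in the log output; the probe is evidently date +%s.%N), compares it against the host, and resets it with sudo date -s @<seconds> when the delta (here 4.93s) is too large. A standalone sketch of that flow, assuming a 2-second tolerance; the real threshold in fix.go is not shown in this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // readClock runs the same probe as the provisioner (date +%s.%N) and
    // parses output like "1714394837.032590502" into a time.Time.
    func readClock() (time.Time, error) {
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            return time.Time{}, err
        }
        parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
        if len(parts) != 2 {
            return time.Time{}, fmt.Errorf("unexpected date output %q", out)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := readClock()
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        fmt.Printf("guest clock: %v (delta=%v)\n", guest, delta)
        if delta > 2*time.Second || delta < -2*time.Second {
            // Equivalent of the log's "sudo date -s @1714394837".
            stamp := "@" + strconv.FormatInt(time.Now().Unix(), 10)
            if out, err := exec.Command("sudo", "date", "-s", stamp).CombinedOutput(); err != nil {
                fmt.Printf("set clock failed: %v\n%s", err, out)
            }
        }
    }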
	I0429 12:47:21.956676    3296 start.go:83] releasing machines lock for "multinode-409200-m02", held for 2m17.1145367s
	I0429 12:47:21.956676    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:24.097712    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:24.097787    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:24.097844    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:26.668092    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:26.668471    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:26.671159    3296 out.go:177] * Found network options:
	I0429 12:47:26.674110    3296 out.go:177]   - NO_PROXY=172.26.185.116
	W0429 12:47:26.677026    3296 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:47:26.679171    3296 out.go:177]   - NO_PROXY=172.26.185.116
	W0429 12:47:26.681764    3296 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 12:47:26.683273    3296 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:47:26.686471    3296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:47:26.686593    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:26.698860    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:47:26.699858    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 12:47:28.889929    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:28.889929    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:28.890453    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:28.924115    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:31.610321    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:31.610321    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:31.611316    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:31.638337    3296 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 12:47:31.638337    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:31.638337    3296 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 12:47:31.772452    3296 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 12:47:31.772719    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0429 12:47:31.772719    3296 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.073819s)
	I0429 12:47:31.772719    3296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0862083s)
	W0429 12:47:31.772719    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:47:31.788178    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:47:31.823416    3296 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 12:47:31.823555    3296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
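
The find/-exec mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix (here /etc/cni/net.d/87-podman-bridge.conflist), so they cannot conflict with the network plugin minikube configures. The same logic as a small Go sketch; disableBridgeCNIs is a hypothetical stand-in for the shell pipeline:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames every bridge or podman CNI config in dir that
    // is not already disabled, returning the files it moved aside.
    func disableBridgeCNIs(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableBridgeCNIs("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("disabled:", files)
    }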
	I0429 12:47:31.823638    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:47:31.823807    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:31.864032    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 12:47:31.877665    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:47:31.915242    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:47:31.938595    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:47:31.951828    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:47:31.988717    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:47:32.025441    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:47:32.061177    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:47:32.097777    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:47:32.133151    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:47:32.172091    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:47:32.207948    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:47:32.240923    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:47:32.262425    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 12:47:32.275413    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:47:32.307262    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:32.522716    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
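
The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, and /etc/cni/net.d as its CNI conf dir, then restarts containerd. A sketch of the central rewrite, the SystemdCgroup flip, using the same anchored, indentation-preserving regex as the sed expression:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Flip the containerd CRI plugin to the cgroupfs driver, like:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("configured containerd to use cgroupfs (SystemdCgroup = false)")
    }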
	I0429 12:47:32.557110    3296 start.go:494] detecting cgroup driver to use...
	I0429 12:47:32.569222    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:47:32.595469    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 12:47:32.595469    3296 command_runner.go:130] > [Unit]
	I0429 12:47:32.595469    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 12:47:32.595469    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 12:47:32.595469    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 12:47:32.595469    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 12:47:32.595469    3296 command_runner.go:130] > StartLimitBurst=3
	I0429 12:47:32.595469    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 12:47:32.595469    3296 command_runner.go:130] > [Service]
	I0429 12:47:32.595469    3296 command_runner.go:130] > Type=notify
	I0429 12:47:32.595469    3296 command_runner.go:130] > Restart=on-failure
	I0429 12:47:32.595469    3296 command_runner.go:130] > Environment=NO_PROXY=172.26.185.116
	I0429 12:47:32.595469    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 12:47:32.595469    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 12:47:32.595469    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 12:47:32.595469    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 12:47:32.595469    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 12:47:32.595469    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 12:47:32.595469    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecStart=
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 12:47:32.595469    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 12:47:32.595469    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitNPROC=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > LimitCORE=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 12:47:32.595469    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 12:47:32.595469    3296 command_runner.go:130] > TasksMax=infinity
	I0429 12:47:32.595469    3296 command_runner.go:130] > TimeoutStartSec=0
	I0429 12:47:32.595469    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 12:47:32.595469    3296 command_runner.go:130] > Delegate=yes
	I0429 12:47:32.595469    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 12:47:32.595469    3296 command_runner.go:130] > KillMode=process
	I0429 12:47:32.596007    3296 command_runner.go:130] > [Install]
	I0429 12:47:32.596007    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0429 12:47:32.609427    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:32.647237    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:47:32.693747    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:47:32.746682    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:47:32.787047    3296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:47:32.851495    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:47:32.878304    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:47:32.915396    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 12:47:32.926454    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:47:32.932598    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 12:47:32.945905    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:47:32.962828    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:47:33.010724    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:47:33.221170    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:47:33.427612    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:47:33.427711    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:47:33.476840    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:33.689420    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:47:36.262572    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5730625s)
	I0429 12:47:36.276209    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:47:36.315570    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:47:36.358605    3296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:47:36.588183    3296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:47:36.818736    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:37.036451    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:47:37.082193    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:47:37.120569    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:37.346985    3296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:47:37.463969    3296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:47:37.477867    3296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:47:37.487999    3296 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 12:47:37.488182    3296 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 12:47:37.488229    3296 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0429 12:47:37.488229    3296 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 12:47:37.488229    3296 command_runner.go:130] > Access: 2024-04-29 12:47:37.365902792 +0000
	I0429 12:47:37.488229    3296 command_runner.go:130] > Modify: 2024-04-29 12:47:37.365902792 +0000
	I0429 12:47:37.488229    3296 command_runner.go:130] > Change: 2024-04-29 12:47:37.370902716 +0000
	I0429 12:47:37.488280    3296 command_runner.go:130] >  Birth: -
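
start.go waits up to 60s for /var/run/cri-dockerd.sock; the stat output above shows the socket appeared immediately after the service restart. A sketch of that wait as a poll loop; waitForSocket is hypothetical, while the real code shells out to stat as shown:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, the same
    // check the one-shot `stat /var/run/cri-dockerd.sock` performs.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("cri-dockerd socket is ready")
    }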
	I0429 12:47:37.488280    3296 start.go:562] Will wait 60s for crictl version
	I0429 12:47:37.501342    3296 ssh_runner.go:195] Run: which crictl
	I0429 12:47:37.507515    3296 command_runner.go:130] > /usr/bin/crictl
	I0429 12:47:37.521938    3296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:47:37.586264    3296 command_runner.go:130] > Version:  0.1.0
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeName:  docker
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 12:47:37.586344    3296 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 12:47:37.586344    3296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:47:37.596233    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:47:37.630211    3296 command_runner.go:130] > 26.0.2
	I0429 12:47:37.640278    3296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:47:37.670207    3296 command_runner.go:130] > 26.0.2
	I0429 12:47:37.673376    3296 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:47:37.677101    3296 out.go:177]   - env NO_PROXY=172.26.185.116
	I0429 12:47:37.680928    3296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 12:47:37.685397    3296 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 12:47:37.687846    3296 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 12:47:37.687846    3296 ip.go:210] interface addr: 172.26.176.1/20
	I0429 12:47:37.704000    3296 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 12:47:37.711121    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
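
The /etc/hosts one-liner above is an upsert: strip any existing host.minikube.internal line, append the current mapping, and copy the result back, so repeated provisioning never accumulates duplicate entries. The same pattern sketched in Go; upsertHost is a hypothetical helper:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line maps
    // name to ip, mirroring the grep -v / echo / cp pipeline.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping, like grep -v $'\t<name>$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "172.26.176.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }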
	I0429 12:47:37.732971    3296 mustload.go:65] Loading cluster: multinode-409200
	I0429 12:47:37.733916    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:47:37.734674    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:47:39.857268    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:39.857605    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:39.857678    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:47:39.858356    3296 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.183.208
	I0429 12:47:39.858356    3296 certs.go:194] generating shared ca certs ...
	I0429 12:47:39.858356    3296 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:47:39.858892    3296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 12:47:39.859101    3296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 12:47:39.859625    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:47:39.859897    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:47:39.860079    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:47:39.860187    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:47:39.860949    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 12:47:39.861313    3296 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 12:47:39.861522    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 12:47:39.861732    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 12:47:39.862267    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 12:47:39.862709    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 12:47:39.863370    3296 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:39.863492    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 12:47:39.864171    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:47:39.918166    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:47:39.972223    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:47:40.026549    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:47:40.080173    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 12:47:40.130915    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:47:40.185551    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 12:47:40.257051    3296 ssh_runner.go:195] Run: openssl version
	I0429 12:47:40.266345    3296 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 12:47:40.281476    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 12:47:40.326424    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.333587    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.333690    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.347944    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 12:47:40.358483    3296 command_runner.go:130] > 3ec20f2e
	I0429 12:47:40.372154    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:47:40.407197    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:47:40.445101    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.454036    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.454036    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.469854    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:47:40.480159    3296 command_runner.go:130] > b5213941
	I0429 12:47:40.494559    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:47:40.530557    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 12:47:40.568929    3296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.576708    3296 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.576777    3296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.591154    3296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 12:47:40.603648    3296 command_runner.go:130] > 51391683
	I0429 12:47:40.618109    3296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
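
Each CA pushed to the node is exposed twice: under its file name in /usr/share/ca-certificates, and as an /etc/ssl/certs/<subject-hash>.0 symlink, which is how OpenSSL locates trust anchors by hash lookup. A sketch of that hash-and-link step; linkByHash is hypothetical, shelling out to the same openssl x509 -hash -noout invocation seen above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a CA file and exposes
    // it as <certsDir>/<hash>.0 so TLS clients can find it by hash.
    func linkByHash(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("created", link)
    }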
	I0429 12:47:40.655162    3296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:47:40.663232    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:40.663949    3296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:47:40.664121    3296 kubeadm.go:928] updating node {m02 172.26.183.208 8443 v1.30.0 docker false true} ...
	I0429 12:47:40.664340    3296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.183.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
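
The kubelet unit rendered above differs per node only in --hostname-override and --node-ip. A sketch of how such a unit could be templated; renderKubeletUnit is illustrative, not minikube's actual generator:

    package main

    import "fmt"

    // renderKubeletUnit fills in the only node-specific fields of the
    // drop-in shown in the log: the binaries version, hostname, and node IP.
    func renderKubeletUnit(version, nodeName, nodeIP string) string {
        return fmt.Sprintf(`[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, version, nodeName, nodeIP)
    }

    func main() {
        fmt.Print(renderKubeletUnit("v1.30.0", "multinode-409200-m02", "172.26.183.208"))
    }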
	I0429 12:47:40.679470    3296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:40.699614    3296 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0429 12:47:40.699653    3296 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 12:47:40.713121    3296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 12:47:40.732579    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 12:47:40.732665    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 12:47:40.732665    3296 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 12:47:40.732818    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:47:40.732874    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:47:40.753036    3296 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 12:47:40.754002    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:47:40.754181    3296 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 12:47:40.760880    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:47:40.760982    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 12:47:40.760982    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 12:47:40.812628    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:47:40.812855    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 12:47:40.812775    3296 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:47:40.812925    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 12:47:40.826950    3296 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 12:47:40.886206    3296 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:47:40.886795    3296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 12:47:40.886795    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0429 12:47:42.201325    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0429 12:47:42.222840    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0429 12:47:42.257904    3296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:47:42.307563    3296 ssh_runner.go:195] Run: grep 172.26.185.116	control-plane.minikube.internal$ /etc/hosts
	I0429 12:47:42.317117    3296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.185.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:47:42.358165    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:42.591954    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:42.625305    3296 host.go:66] Checking if "multinode-409200" exists ...
	I0429 12:47:42.626025    3296 start.go:316] joinCluster: &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0429 12:47:42.626218    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 12:47:42.626311    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 12:47:44.890078    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:47:44.890621    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:44.890685    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 12:47:47.528269    3296 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 12:47:47.528349    3296 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:47:47.528435    3296 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 12:47:47.732559    3296 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a 
	I0429 12:47:47.732559    3296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.106248s)
	I0429 12:47:47.732559    3296 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:47:47.732559    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-409200-m02"
	I0429 12:47:47.979768    3296 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:47:49.375923    3296 command_runner.go:130] > [preflight] Running pre-flight checks
	I0429 12:47:49.375982    3296 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0429 12:47:49.375982    3296 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0429 12:47:49.375982    3296 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:47:49.376091    3296 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001862975s
	I0429 12:47:49.376160    3296 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0429 12:47:49.376160    3296 command_runner.go:130] > This node has joined the cluster:
	I0429 12:47:49.376327    3296 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0429 12:47:49.376386    3296 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0429 12:47:49.376386    3296 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0429 12:47:49.376447    3296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token llghd5.xhmkaosfb4roq849 --discovery-token-ca-cert-hash sha256:7934064b7d5818da8a479083d1671ea82e512e044277d70994aac8d08fb8b51a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-409200-m02": (1.6438751s)
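
The join flow above has two halves: kubeadm token create --print-join-command --ttl=0 on the control plane emits the exact join line, and minikube appends the worker-specific flags before running it on m02. A sketch of that assembly; buildJoin is hypothetical, and the token and hash below are redacted placeholders, not values from this run:

    package main

    import "fmt"

    // buildJoin appends the worker-specific flags seen in the log to the
    // join command printed on the control plane.
    func buildJoin(printed, criSocket, nodeName string) string {
        return fmt.Sprintf("%s --ignore-preflight-errors=all --cri-socket %s --node-name=%s",
            printed, criSocket, nodeName)
    }

    func main() {
        printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
        fmt.Println(buildJoin(printed, "unix:///var/run/cri-dockerd.sock", "multinode-409200-m02"))
    }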
	I0429 12:47:49.376563    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 12:47:49.610407    3296 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0429 12:47:49.839420    3296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-409200-m02 minikube.k8s.io/updated_at=2024_04_29T12_47_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=multinode-409200 minikube.k8s.io/primary=false
	I0429 12:47:49.973372    3296 command_runner.go:130] > node/multinode-409200-m02 labeled
	I0429 12:47:49.973997    3296 start.go:318] duration metric: took 7.3479147s to joinCluster
	I0429 12:47:49.974087    3296 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0429 12:47:49.980586    3296 out.go:177] * Verifying Kubernetes components...
	I0429 12:47:49.974888    3296 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:47:49.995587    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:47:50.223514    3296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:47:50.254494    3296 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 12:47:50.257555    3296 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.185.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:47:50.258504    3296 node_ready.go:35] waiting up to 6m0s for node "multinode-409200-m02" to be "Ready" ...
	I0429 12:47:50.258504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:50.258504    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:50.258504    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:50.258504    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:50.277046    3296 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0429 12:47:50.277046    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:50.277046    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:50 GMT
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Audit-Id: 1aa7904c-6305-4ec6-bae7-4b076ad2e827
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:50.277046    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:50.277046    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:50.277551    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:50.773175    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:50.773175    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:50.773175    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:50.773175    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:50.777032    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:50.777685    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:50 GMT
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Audit-Id: e4937045-09b1-472b-9826-805039567d77
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:50.777685    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:50.777685    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:50.777685    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:50.777825    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:51.273583    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:51.273583    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:51.273583    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:51.273583    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:51.282795    3296 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 12:47:51.283663    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:51.283663    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:51.283663    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Content-Length: 3921
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:51 GMT
	I0429 12:47:51.283663    3296 round_trippers.go:580]     Audit-Id: 03d6bb61-7e8f-4b5e-8dc6-ad4f82291662
	I0429 12:47:51.283748    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:51.283845    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"583","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0429 12:47:51.760217    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:51.760217    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:51.760217    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:51.760217    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:51.767312    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:47:51.767938    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:51 GMT
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Audit-Id: d0e8cb5b-5939-44df-ad18-19d37e8cba55
	I0429 12:47:51.767938    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:51.767987    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:51.767987    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:51.767987    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:51.768019    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:51.768019    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:52.261334    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:52.261334    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:52.261334    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:52.261334    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:52.267062    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:52.267062    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:52.267062    3296 round_trippers.go:580]     Audit-Id: 25f6fc60-cd0b-4848-9be9-c476f74565e8
	I0429 12:47:52.267062    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:52.267451    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:52.267451    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:52.267451    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:52 GMT
	I0429 12:47:52.267451    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:52.267987    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
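
[editor's note: the cycle above repeats throughout this log — roughly every 500ms the client issues GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02 and node_ready.go checks whether the node's Ready condition has turned True. As a rough illustration only, here is a minimal client-go sketch of that wait pattern; it is not minikube's implementation, and the package and function names (nodewait, waitForNodeReady) are hypothetical.]

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls GET /api/v1/nodes/<name> every 500ms (the cadence
    // visible in this log) until the node's NodeReady condition is True or the
    // timeout expires. Hypothetical sketch; not minikube's actual code.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					if c.Status != corev1.ConditionTrue {
    						fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
    						return false, nil
    					}
    					return true, nil
    				}
    			}
    			return false, nil // no Ready condition reported yet
    		})
    }

[Returning (false, nil) on a transient GET error keeps the loop alive rather than failing fast, which mirrors how the harness here simply keeps polling until its overall wait window expires.]
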
	I0429 12:47:52.762151    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:52.762704    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:52.762704    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:52.762704    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:52.766898    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:52.766898    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Audit-Id: 14d1a1d4-c569-4523-ae8f-85dcf4ae0441
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:52.767375    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:52.767375    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:52.767375    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:52 GMT
	I0429 12:47:52.767548    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:53.265576    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:53.265576    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:53.265576    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:53.265576    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:53.270147    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:53.270556    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:53.270556    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:53.270556    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:53.270556    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:53.270556    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:53 GMT
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Audit-Id: 1c2e0aaf-61ef-4ae0-8c6b-2a6ebe793d07
	I0429 12:47:53.270647    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:53.270797    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:53.772050    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:53.772115    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:53.772115    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:53.772115    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:53.776298    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:53.776298    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:53.776407    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:53.776407    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:53.776407    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:53 GMT
	I0429 12:47:53.776556    3296 round_trippers.go:580]     Audit-Id: 650c8085-cdb7-4f97-be72-505b96355229
	I0429 12:47:53.776664    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:54.272406    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:54.272406    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:54.272406    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:54.272406    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:54.277043    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:54.277043    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:54.277043    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:54.277043    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:54.277043    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:54 GMT
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Audit-Id: efeda477-6b13-4a7f-8e6e-8ca984d592e0
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:54.277721    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:54.277721    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:54.277805    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:54.277899    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:54.761952    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:54.761952    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:54.761952    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:54.761952    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:54.770579    3296 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:47:54.770847    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Audit-Id: b2ebdb7e-6e55-4343-9e4f-d6ca42f04044
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:54.770847    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:54.770847    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:54.770847    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:54.770934    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:54.770934    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:54 GMT
	I0429 12:47:54.771140    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:55.267085    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:55.267085    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:55.267085    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:55.267085    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:55.274773    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:47:55.274773    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:55 GMT
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Audit-Id: 8d01f411-0e28-4e00-98c7-c840216695b8
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:55.274773    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:55.274773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:55.274773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:55.275758    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:55.766800    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:55.766860    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:55.766860    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:55.766860    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:55.772166    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:55.772166    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:55.772166    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:55.772166    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:55 GMT
	I0429 12:47:55.772166    3296 round_trippers.go:580]     Audit-Id: 834aa731-1ee4-4c74-ade3-554a90de45da
	I0429 12:47:55.772166    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.259313    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:56.259397    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:56.259397    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:56.259397    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:56.262998    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:56.262998    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:56 GMT
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Audit-Id: bd25cc06-f1d2-4ce8-b018-c6ceca63b38b
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:56.262998    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:56.262998    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:56.262998    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:56.262998    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.766384    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:56.766384    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:56.766384    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:56.766384    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:56.771982    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:56.771982    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:56.772048    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:56.772048    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:56 GMT
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Audit-Id: 6d0a1606-3048-4a88-af62-130b8e76e2dc
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:56.772048    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:56.772252    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:56.772493    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:57.259348    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:57.259348    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:57.259348    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:57.259348    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:57.264741    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:47:57.264741    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:57 GMT
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Audit-Id: bfaae552-1eaa-47ae-94ea-9f0308003f82
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:57.264741    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:57.264741    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:57.264741    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:57.264741    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:57.764591    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:57.764797    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:57.764797    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:57.764867    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:57.768479    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:47:57.768918    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Audit-Id: 9bb134cf-d970-4ec1-9255-e635219f5243
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:57.768918    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:57.768918    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:57.768993    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:57.769024    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:57.769024    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:57 GMT
	I0429 12:47:57.769164    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:58.273211    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:58.273211    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:58.273211    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:58.273211    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:58.277807    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:58.277807    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:58.277807    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:58 GMT
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Audit-Id: d8b2db52-e22e-4852-953f-768d65a1f21e
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:58.277807    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:58.277807    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:58.277807    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:58.764134    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:58.764209    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:58.764230    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:58.764267    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:58.768270    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:58.768270    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:58.768270    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:58.768270    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:58 GMT
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Audit-Id: cbbabbc5-59cb-4870-bd7e-70382a66be88
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:58.768355    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:58.768543    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:59.272633    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:59.272633    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:59.272633    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:59.272633    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:59.277493    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:47:59.277493    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:59 GMT
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Audit-Id: bca1926c-6392-44d2-a4cc-d4cbbd6f6a9a
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:59.277493    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:59.277493    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:59.277493    3296 round_trippers.go:580]     Content-Length: 4030
	I0429 12:47:59.277691    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"589","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0429 12:47:59.277691    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:47:59.763504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:47:59.763504    3296 round_trippers.go:469] Request Headers:
	I0429 12:47:59.763504    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:47:59.763504    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:47:59.769826    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:47:59.769826    3296 round_trippers.go:577] Response Headers:
	I0429 12:47:59.769826    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:47:59 GMT
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Audit-Id: 955a0863-ccf6-47ac-a93f-f1d961e0cda3
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:47:59.769826    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:47:59.769826    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:47:59.769826    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:00.262517    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:00.262587    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:00.262587    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:00.262587    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:00.270447    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:00.270447    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Audit-Id: db443f2d-d881-4293-b30b-75cad07002c2
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:00.270447    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:00.270447    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:00.270447    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:00 GMT
	I0429 12:48:00.270447    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:00.759504    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:00.759735    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:00.759735    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:00.759735    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:00.765102    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:48:00.765102    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:00.765398    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:00.765398    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:00 GMT
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Audit-Id: 1805a419-555f-4cad-8ada-e15690b29346
	I0429 12:48:00.765398    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:00.765788    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.263481    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:01.263481    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:01.263481    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:01.263481    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:01.267090    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:01.267090    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:01.267090    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:01.267090    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:01.267848    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:01 GMT
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Audit-Id: c0826b49-0f93-45aa-be31-8840b0185ff5
	I0429 12:48:01.267848    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:01.268076    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.765912    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:01.765912    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:01.765912    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:01.765912    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:01.768972    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:01.768972    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:01.768972    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:01.768972    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:01 GMT
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Audit-Id: f3db533c-fdd9-4604-baed-603c4f98caa5
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:01.768972    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:01.769782    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:01.770459    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
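
[editor's note: in the response bodies above, the node object's resourceVersion advances (583 → 589 → 600) while "Ready" stays "False" — each bump is a write to the Node object by a controller or the kubelet. As a hedged aside, the same wait could be expressed with a watch instead of 500ms polling; the sketch below is an assumption about an alternative pattern, not what minikube does here, and watchNodeReady is a hypothetical name.]

    package nodewait

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/fields"
    	"k8s.io/client-go/kubernetes"
    )

    // watchNodeReady blocks until the named node reports Ready=True, consuming
    // watch events instead of issuing a GET every 500ms. Each event corresponds
    // to a resourceVersion bump like the 583 -> 589 -> 600 sequence in this log.
    func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
    		FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		n, ok := ev.Object.(*corev1.Node)
    		if !ok {
    			continue
    		}
    		fmt.Printf("node %q at resourceVersion %s\n", name, n.ResourceVersion)
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    	}
    	return ctx.Err()
    }
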
	I0429 12:48:02.259098    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:02.259098    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:02.259098    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:02.259098    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:02.264247    3296 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:48:02.264310    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:02 GMT
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Audit-Id: 5aa046f0-9575-4a01-bb1c-bf41a8778174
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:02.264310    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:02.264310    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:02.264310    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:02.264464    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:02.766902    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:02.766902    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:02.766902    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:02.766902    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:02.770362    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:02.771226    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Audit-Id: 6cdbcd73-477a-4bd7-8865-15a410e6d91e
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:02.771226    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:02.771226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:02.771226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:02.771307    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:02 GMT
	I0429 12:48:02.771606    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:03.271334    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:03.271334    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:03.271334    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:03.271334    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:03.275931    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:03.275931    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:03.275931    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:03 GMT
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Audit-Id: 2db591d0-97e2-4d0b-8d4f-60a045b4b473
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:03.275931    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:03.275931    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:03.275931    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:03.760823    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:03.760887    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:03.760952    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:03.760952    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:03.764565    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:03.764719    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Audit-Id: 4bd352bf-9473-479a-82ac-386bf52f710b
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:03.764719    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:03.764719    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:03.764719    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:03 GMT
	I0429 12:48:03.764892    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:04.271987    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:04.271987    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:04.271987    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:04.271987    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:04.275972    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:04.276630    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:04.276630    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:04.276630    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:04 GMT
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Audit-Id: ba8439d6-b081-4ef6-98d0-d3df255318f8
	I0429 12:48:04.276630    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:04.276914    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:04.277199    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:04.765191    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:04.765263    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:04.765263    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:04.765294    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:04.768620    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:04.768620    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Audit-Id: cb134310-a03b-4069-a517-f799ccab4010
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:04.769281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:04.769281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:04.769281    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:04 GMT
	I0429 12:48:04.769572    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:05.259044    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:05.259044    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:05.259044    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:05.259044    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:05.261814    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:05.261814    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:05.261814    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:05 GMT
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Audit-Id: 04148dfe-eeec-48a6-9915-4c5b416cd3d4
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:05.262710    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:05.262710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:05.262838    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:05.262856    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:05.761517    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:05.761608    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:05.761608    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:05.761608    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:05.765654    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:05.765930    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Audit-Id: 8cbdf49f-3b0c-4a29-ab98-997512edc7f9
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:05.765930    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:05.765930    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:05.765930    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:05 GMT
	I0429 12:48:05.766810    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.265026    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:06.265026    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:06.265026    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:06.265153    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:06.269016    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:06.269016    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:06.269016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:06.269016    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:06 GMT
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Audit-Id: 282f730b-2e0e-4652-8968-b1ba746e4a29
	I0429 12:48:06.269016    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:06.269586    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.759190    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:06.759284    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:06.759284    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:06.759284    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:06.765962    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:48:06.765962    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Audit-Id: f392bda5-7aad-41cf-85f9-7274c03e30e1
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:06.765962    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:06.765962    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:06.765962    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:06 GMT
	I0429 12:48:06.765962    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:06.766988    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:07.265922    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:07.265922    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:07.265922    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:07.265922    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:07.272554    3296 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 12:48:07.272554    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:07.272554    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:07.272554    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:07 GMT
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Audit-Id: 44a4d7ef-d03a-425c-a66f-060a35d40b90
	I0429 12:48:07.272554    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:07.273368    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:07.766378    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:07.766378    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:07.766459    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:07.766459    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:07.770741    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:07.770741    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:07.770741    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:07.771456    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:07.771456    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:07 GMT
	I0429 12:48:07.771456    3296 round_trippers.go:580]     Audit-Id: 939a3f92-4ee9-4114-8d4b-26ebd919b43f
	I0429 12:48:07.771877    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.271681    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:08.271681    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:08.271751    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.271751    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:08.275134    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:08.275134    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:08.275134    3296 round_trippers.go:580]     Audit-Id: 57b0b765-1aa9-4fb0-a7b7-39a603e784f8
	I0429 12:48:08.275134    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:08.275581    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:08.275581    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:08.275581    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:08.275581    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:08 GMT
	I0429 12:48:08.276135    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.761756    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:08.761756    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:08.761756    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:08.761840    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:08.766237    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:08.766553    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:08.766553    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:08.766553    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:08 GMT
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Audit-Id: 1238846c-05bd-4bab-be3c-2d0d495523e1
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:08.766553    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:08.767232    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:08.767779    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:09.273783    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:09.273783    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:09.273853    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:09.273853    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:09.277281    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:09.277281    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Audit-Id: f9e0f9ee-a658-41fd-b611-988a7b5e6905
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:09.277281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:09.277281    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:09.277281    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:09 GMT
	I0429 12:48:09.277281    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:09.773435    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:09.773435    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:09.773435    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:09.773435    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:09.778077    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:09.778077    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:09.778077    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:09 GMT
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Audit-Id: 13dfc8fa-d709-460a-83d9-be31b8d38a40
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:09.778077    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:09.778077    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:09.778077    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.273817    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:10.273873    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:10.273873    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:10.273873    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:10.278269    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:10.278269    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:10.278269    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:10 GMT
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Audit-Id: be6af11e-f775-49f7-976d-bccb19209c49
	I0429 12:48:10.278269    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:10.278867    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:10.278867    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:10.278994    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.770369    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:10.770452    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:10.770452    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:10.770452    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:10.774602    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:10.774602    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:10.774602    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:10.774602    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:10.774602    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:10 GMT
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Audit-Id: d720d994-848d-4b2c-aef0-bf666190289f
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:10.775034    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:10.775097    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:10.775887    3296 node_ready.go:53] node "multinode-409200-m02" has status "Ready":"False"
	I0429 12:48:11.272722    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:11.272849    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:11.272849    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:11.272849    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:11.281710    3296 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 12:48:11.281710    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:11.281710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:11 GMT
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Audit-Id: b03a569f-9b3c-4867-82ca-4ad703c59ff4
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:11.281710    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:11.281710    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:11.282825    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:11.771706    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:11.771747    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:11.771747    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:11.771747    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:11.776773    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:11.776773    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:11.776773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:11.776773    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:11 GMT
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Audit-Id: dff77a36-365e-4591-8f7d-06a6258a1e54
	I0429 12:48:11.776773    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:11.776773    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:12.272009    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:12.272085    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.272085    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.272085    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.276159    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:12.276159    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.276159    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.276159    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Audit-Id: e0373aa0-d69a-4307-a088-fc917be35e5d
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.276159    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.277387    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"600","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0429 12:48:12.771145    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:12.771145    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.771145    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.771145    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.775119    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.775525    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.775525    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.775525    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.775525    3296 round_trippers.go:580]     Audit-Id: 50b37234-dedd-41e6-9046-584be76d0e79
	I0429 12:48:12.775715    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"625","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0429 12:48:12.776327    3296 node_ready.go:49] node "multinode-409200-m02" has status "Ready":"True"
	I0429 12:48:12.776327    3296 node_ready.go:38] duration metric: took 22.5176475s for node "multinode-409200-m02" to be "Ready" ...
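The long run of near-identical GET/response blocks above is minikube's node_ready poll: roughly every 500 ms it re-fetches /api/v1/nodes/multinode-409200-m02 and checks the node's Ready condition until it flips to True (here after ~22.5 s). For readers following the log, here is a minimal client-go sketch of that polling pattern. This is an illustration, not minikube's actual node_ready.go code; the 500 ms cadence matches the timestamps above, while the kubeconfig path and the timeout are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the Node object until its Ready condition
// reports True, mirroring the ~500ms GET loop recorded in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Illustrative: load the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log; the 6-minute timeout is an assumption.
	if err := waitNodeReady(context.Background(), cs, "multinode-409200-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```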
	I0429 12:48:12.776327    3296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:48:12.776469    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods
	I0429 12:48:12.776469    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.776469    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.776469    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.784492    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:12.784492    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Audit-Id: 94d646b8-05cf-4d03-9b1b-3ef27e586afb
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.784492    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.784492    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.784492    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.785404    3296 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"625"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0429 12:48:12.790223    3296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.790579    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 12:48:12.790648    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.790648    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.790648    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.793956    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.793956    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.793956    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.793956    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.793956    3296 round_trippers.go:580]     Audit-Id: 388d1630-fe39-4ef2-8fb2-aad991435d61
	I0429 12:48:12.794511    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"418","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0429 12:48:12.795121    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.795121    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.795187    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.795187    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.798037    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.798162    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Audit-Id: cd7aa80d-9409-4761-937d-9eeb24a8d1ee
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.798162    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.798162    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.798162    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.798351    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.798576    3296 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.798576    3296 pod_ready.go:81] duration metric: took 8.2932ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.798576    3296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.798576    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 12:48:12.798576    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.798576    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.798576    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.801224    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.802225    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Audit-Id: c10e3746-87e3-4f93-991b-058201592f85
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.802225    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.802225    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.802308    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.802308    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.802566    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"d181e36d-2901-4660-a441-6f6b5f3d6c5f","resourceVersion":"381","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.185.116:2379","kubernetes.io/config.hash":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.mirror":"c66d644ea477a94b97c6ebe1092303ff","kubernetes.io/config.seen":"2024-04-29T12:44:32.885743739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0429 12:48:12.803193    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.803254    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.803254    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.803254    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.806226    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:12.806226    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Audit-Id: a70fb09e-0dd5-4a3b-8869-47ac06f9e5bd
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.806226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.806226    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.806226    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.806226    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.806226    3296 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.807257    3296 pod_ready.go:81] duration metric: took 8.6815ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.807257    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.807257    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 12:48:12.807257    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.807257    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.807257    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.821225    3296 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 12:48:12.821952    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Audit-Id: 4eb177f8-3f79-41c6-8259-f2bfc89fb2c9
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.821952    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.821952    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.821952    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.822386    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"da427161-547d-4e8d-a545-8b243ce10f12","resourceVersion":"380","creationTimestamp":"2024-04-29T12:44:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.185.116:8443","kubernetes.io/config.hash":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.mirror":"fab3ac6a5694131422285e941b90103f","kubernetes.io/config.seen":"2024-04-29T12:44:24.392874586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0429 12:48:12.822632    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.822632    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.822632    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.822632    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.825968    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.826380    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.826380    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.826380    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.826380    3296 round_trippers.go:580]     Audit-Id: e83ce058-61e8-48a6-afb7-50c47b79607d
	I0429 12:48:12.826524    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.826700    3296 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.827026    3296 pod_ready.go:81] duration metric: took 19.4424ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.827026    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.827147    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 12:48:12.827147    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.827147    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.827147    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.831978    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:12.831978    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Audit-Id: 083e5fff-f7d4-4f9e-be22-edaff55517dc
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.832086    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.832086    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.832086    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.832503    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"382","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0429 12:48:12.833774    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:12.833774    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.833774    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.833774    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.836798    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.836798    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.836798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.836798    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Audit-Id: e04f6a75-1171-438a-96f3-dddbe508dc2a
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.836798    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.836798    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:12.836798    3296 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:12.836798    3296 pod_ready.go:81] duration metric: took 9.7721ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.836798    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:12.974646    3296 request.go:629] Waited for 136.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:48:12.974716    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 12:48:12.974716    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:12.974716    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:12.974793    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:12.979279    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:12.979279    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:12.979279    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:12.979279    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:12 GMT
	I0429 12:48:12.979279    3296 round_trippers.go:580]     Audit-Id: c2f9af39-ab8b-40b6-a94a-dec9b2e14de3
	I0429 12:48:12.979829    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"375","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0429 12:48:13.176469    3296 request.go:629] Waited for 195.6901ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.176469    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.176734    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.176734    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.176734    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.184321    3296 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 12:48:13.184321    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Audit-Id: eb782e6b-e0d4-4880-a25e-059332928fe3
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.184321    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.184321    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.184321    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.184321    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:13.185604    3296 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.185604    3296 pod_ready.go:81] duration metric: took 348.8036ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.185604    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.380150    3296 request.go:629] Waited for 194.3872ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 12:48:13.380234    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 12:48:13.380438    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.380438    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.380503    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.384246    3296 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 12:48:13.384246    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Audit-Id: 16902323-5317-49dc-a050-1c05fbf2447d
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.385054    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.385054    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.385054    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.385189    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 12:48:13.584220    3296 request.go:629] Waited for 197.0057ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:13.584358    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200-m02
	I0429 12:48:13.584358    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.584358    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.584358    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.588371    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:13.588371    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.588371    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Audit-Id: aad7e695-0358-4fac-97a0-89102aa3e85c
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.588371    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.588371    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.589260    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"625","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0429 12:48:13.589537    3296 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.589537    3296 pod_ready.go:81] duration metric: took 403.9301ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.589537    3296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.786079    3296 request.go:629] Waited for 196.2715ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:48:13.786079    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 12:48:13.786079    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.786383    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.786383    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.790876    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:13.790876    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Audit-Id: 7a61fcbd-566e-4344-b176-faf124521ad5
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.790876    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.790876    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.790876    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.791284    3296 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"379","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0429 12:48:13.974364    3296 request.go:629] Waited for 182.6101ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.974515    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes/multinode-409200
	I0429 12:48:13.974651    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:13.974896    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:13.974896    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:13.977839    3296 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:48:13.978533    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Audit-Id: 10150d3a-18fb-49e6-b280-e98bbb3d444b
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:13.978533    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:13.978607    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:13.978607    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:13.978607    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:13 GMT
	I0429 12:48:13.978855    3296 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0429 12:48:13.979415    3296 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 12:48:13.979415    3296 pod_ready.go:81] duration metric: took 389.8741ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 12:48:13.979500    3296 pod_ready.go:38] duration metric: took 1.2030784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
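The pod_ready.go lines above poll each control-plane pod and report it "Ready" once the pod's PodReady condition is True, then confirm the owning node. A minimal Go sketch of that readiness check (isPodReady is a hypothetical helper, not minikube's actual code):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady mirrors the check logged by pod_ready.go above: a pod
    // counts as "Ready" when its PodReady condition has status True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }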
	I0429 12:48:13.979500    3296 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:48:13.992716    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:48:14.019253    3296 system_svc.go:56] duration metric: took 39.7527ms WaitForService to wait for kubelet
	I0429 12:48:14.019320    3296 kubeadm.go:576] duration metric: took 24.0450452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:48:14.019320    3296 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:48:14.177527    3296 request.go:629] Waited for 158.0768ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.185.116:8443/api/v1/nodes
	I0429 12:48:14.177815    3296 round_trippers.go:463] GET https://172.26.185.116:8443/api/v1/nodes
	I0429 12:48:14.177815    3296 round_trippers.go:469] Request Headers:
	I0429 12:48:14.177815    3296 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:48:14.177815    3296 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 12:48:14.181881    3296 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 12:48:14.181881    3296 round_trippers.go:577] Response Headers:
	I0429 12:48:14.182639    3296 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 12:48:14.182639    3296 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Date: Mon, 29 Apr 2024 12:48:14 GMT
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Audit-Id: aaa1c9b4-e781-4a89-9137-b98b7184a74c
	I0429 12:48:14.182639    3296 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 12:48:14.182747    3296 round_trippers.go:580]     Content-Type: application/json
	I0429 12:48:14.182822    3296 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"424","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0429 12:48:14.183880    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:48:14.183880    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:48:14.183880    3296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:48:14.183880    3296 node_conditions.go:123] node cpu capacity is 2
	I0429 12:48:14.183880    3296 node_conditions.go:105] duration metric: took 164.5584ms to run NodePressure ...
	I0429 12:48:14.183880    3296 start.go:240] waiting for startup goroutines ...
	I0429 12:48:14.183880    3296 start.go:254] writing updated cluster config ...
	I0429 12:48:14.198239    3296 ssh_runner.go:195] Run: rm -f paused
	I0429 12:48:14.346996    3296 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 12:48:14.350122    3296 out.go:177] * Done! kubectl is now configured to use "multinode-409200" cluster and "default" namespace by default
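The repeated request.go:629 "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces out the burst of GETs issued while polling pods and nodes. A minimal sketch, assuming a reachable kubeconfig, of raising those limits on rest.Config (the values here are illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config into a *rest.Config.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5; requests above this rate wait client-side
        cfg.Burst = 100 // default is 10; short bursts allowed before throttling
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }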
	
	
	==> Docker <==
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.635457316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636083617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636125217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.636297718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736418780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736676980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.736820981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:45:02 multinode-409200 dockerd[1329]: time="2024-04-29T12:45:02.738556985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861076906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861156406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861205506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:39 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:39.861322806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:40 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:48:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a063be2c6a2b3661cf9646e44862baf96718fcd26549482289dd884d3e11b6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 12:48:41 multinode-409200 cri-dockerd[1227]: time="2024-04-29T12:48:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.443570060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.443726962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.444350768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:48:41 multinode-409200 dockerd[1329]: time="2024-04-29T12:48:41.444618971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 12:49:30 multinode-409200 dockerd[1322]: 2024/04/29 12:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
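The "superfluous response.WriteHeader call" messages from dockerd are emitted by Go's net/http whenever a handler sets the response status more than once; the second call is ignored and logged. A minimal sketch reproducing the warning (a hypothetical handler, unrelated to dockerd's actual code path):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // status line is committed here
            // Ignored; net/http logs:
            //   http: superfluous response.WriteHeader call from ...
            w.WriteHeader(http.StatusInternalServerError)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }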
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a3d650be06c0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Running             busybox                   0                   d3a063be2c6a2       busybox-fc5497c4f-gr44t
	98ab9c7d68851       cbb01a7bd410d                                                                                         20 minutes ago      Running             coredns                   0                   ba73c7e4d62c2       coredns-7db6d8ff4d-ctb8n
	5a03c0724371b       6e38f40d628db                                                                                         20 minutes ago      Running             storage-provisioner       0                   ea71df7098870       storage-provisioner
	caeb8f4bcea15       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              20 minutes ago      Running             kindnet-cni               0                   3792c8bbb983d       kindnet-xj48j
	3ba8caba4bc56       a0bf559e280cf                                                                                         20 minutes ago      Running             kube-proxy                0                   2d26cd85561dd       kube-proxy-g2jp8
	315326a1ce10c       259c8277fcbbc                                                                                         21 minutes ago      Running             kube-scheduler            0                   c88537851c019       kube-scheduler-multinode-409200
	390664a859132       c42f13656d0b2                                                                                         21 minutes ago      Running             kube-apiserver            0                   85aab37150a11       kube-apiserver-multinode-409200
	5adb6a9084e4b       c7aad43836fa5                                                                                         21 minutes ago      Running             kube-controller-manager   0                   19fd9c3dddd43       kube-controller-manager-multinode-409200
	030b6d42f50f9       3861cfcd7c04c                                                                                         21 minutes ago      Running             etcd                      0                   5d39391ba43b6       etcd-multinode-409200
	
	
	==> coredns [98ab9c7d6885] <==
	[INFO] 10.244.0.3:49783 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199102s
	[INFO] 10.244.1.2:51801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218002s
	[INFO] 10.244.1.2:45305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000112002s
	[INFO] 10.244.1.2:41116 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177102s
	[INFO] 10.244.1.2:57979 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158402s
	[INFO] 10.244.1.2:49615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000059801s
	[INFO] 10.244.1.2:42034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000564s
	[INFO] 10.244.1.2:59112 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133602s
	[INFO] 10.244.1.2:44817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055401s
	[INFO] 10.244.0.3:47750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202902s
	[INFO] 10.244.0.3:42610 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058701s
	[INFO] 10.244.0.3:48140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094301s
	[INFO] 10.244.0.3:43769 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056701s
	[INFO] 10.244.1.2:35529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000365104s
	[INFO] 10.244.1.2:35716 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176402s
	[INFO] 10.244.1.2:54486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129601s
	[INFO] 10.244.1.2:44351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000646s
	[INFO] 10.244.0.3:53572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267303s
	[INFO] 10.244.0.3:60447 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147901s
	[INFO] 10.244.0.3:49757 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147202s
	[INFO] 10.244.0.3:51305 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081501s
	[INFO] 10.244.1.2:52861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175302s
	[INFO] 10.244.1.2:45137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199102s
	[INFO] 10.244.1.2:32823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190002s
	[INFO] 10.244.1.2:41704 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061001s
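The NXDOMAIN/NOERROR pairs above are resolv.conf search-path expansion at work: with options ndots:5 (see the resolv.conf rewrite in the Docker log), a name like kubernetes.default has fewer than five dots, so each search suffix is tried alongside the literal name, producing queries such as kubernetes.default.default.svc.cluster.local before kubernetes.default.svc.cluster.local answers. A minimal sketch of that expansion (a hypothetical helper, not CoreDNS or libc code):

    package main

    import (
        "fmt"
        "strings"
    )

    // expandQuery lists the names a stub resolver tries for a query,
    // per resolv.conf ndots/search semantics.
    func expandQuery(name string, ndots int, search []string) []string {
        var tries []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                tries = append(tries, name+"."+s)
            }
        }
        return append(tries, name)
    }

    func main() {
        search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        // Prints the same candidate names queried in the coredns log above.
        fmt.Println(expandQuery("kubernetes.default", 5, search))
    }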
	
	
	==> describe nodes <==
	Name:               multinode-409200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_44_34_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:04:27 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:04:27 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:04:27 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:04:27 +0000   Mon, 29 Apr 2024 12:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.185.116
	  Hostname:    multinode-409200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5308ef48a604eec8cefa00b64c99d59
	  System UUID:                560251d1-f442-3048-aa69-bfa1c5b44db2
	  Boot ID:                    c750a879-a407-4348-b519-0853c8e57aab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gr44t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7db6d8ff4d-ctb8n                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-multinode-409200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-xj48j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-multinode-409200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-409200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-g2jp8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-multinode-409200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node multinode-409200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node multinode-409200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node multinode-409200 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node multinode-409200 event: Registered Node multinode-409200 in Controller
	  Normal  NodeReady                20m   kubelet          Node multinode-409200 status is now: NodeReady
	
	
	Name:               multinode-409200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_49_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:05:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 12:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 12:48:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.183.208
	  Hostname:    multinode-409200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d58c45a85c440c597f0a96b30e84f09
	  System UUID:                8c823ba6-3970-cc46-8a8d-d45bb5bace8c
	  Boot ID:                    40b5e515-11a3-4198-b85e-669d356ae177
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xvm2v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kindnet-svw9w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-lwc65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node multinode-409200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node multinode-409200-m02 event: Registered Node multinode-409200-m02 in Controller
	  Normal  NodeReady                17m                kubelet          Node multinode-409200-m02 status is now: NodeReady
	
	
	Name:               multinode-409200-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_52_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:52:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:59:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.26.183.1
	  Hostname:    multinode-409200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb1f09d6927404399a9e8da87cc3dea
	  System UUID:                4609bb56-f956-874e-bb10-b85027c7b67f
	  Boot ID:                    0af6b34b-d477-4688-94f5-fcd2f3452b10
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7p265       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bbxqg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node multinode-409200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node multinode-409200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node multinode-409200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node multinode-409200-m03 event: Registered Node multinode-409200-m03 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-409200-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m1s               node-controller  Node multinode-409200-m03 status is now: NodeNotReady
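
Every condition on multinode-409200-m03 above reads Unknown with reason NodeStatusUnknown, and the node carries the node.kubernetes.io/unreachable taints: this is the node-lifecycle controller reacting to a kubelet that stopped posting status, expected here because the test had stopped this node. A minimal way to confirm that state from the host, assuming kubectl is pointed at this cluster (the jsonpath queries are illustrative, not part of the test suite):

    # Ready flips from True to Unknown once heartbeats stop
    kubectl --context multinode-409200 get node multinode-409200-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # the unreachable taints are applied by the node-lifecycle controller
    kubectl --context multinode-409200 get node multinode-409200-m03 \
      -o jsonpath='{.spec.taints}'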
	
	
	==> dmesg <==
	[  +7.197340] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr29 12:43] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.192639] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +31.320327] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.121697] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.579483] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.194821] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.242876] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[Apr29 12:44] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +0.202815] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.211261] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.302320] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[ +11.768479] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.123744] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.764600] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +6.490625] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.131334] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.059343] systemd-fstab-generator[2119]: Ignoring "noauto" option for root device
	[  +0.134282] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.603784] systemd-fstab-generator[2313]: Ignoring "noauto" option for root device
	[  +0.252752] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.930863] kauditd_printk_skb: 51 callbacks suppressed
	[Apr29 12:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [030b6d42f50f] <==
	{"level":"info","ts":"2024-04-29T12:52:48.817177Z","caller":"traceutil/trace.go:171","msg":"trace[792687058] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:944; }","duration":"139.225379ms","start":"2024-04-29T12:52:48.677942Z","end":"2024-04-29T12:52:48.817167Z","steps":["trace[792687058] 'range keys from in-memory index tree'  (duration: 136.552866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:52:53.714562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.17568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-409200-m03\" ","response":"range_response_count:1 size:3147"}
	{"level":"info","ts":"2024-04-29T12:52:53.714955Z","caller":"traceutil/trace.go:171","msg":"trace[1444792618] range","detail":"{range_begin:/registry/minions/multinode-409200-m03; range_end:; response_count:1; response_revision:954; }","duration":"183.606183ms","start":"2024-04-29T12:52:53.531327Z","end":"2024-04-29T12:52:53.714934Z","steps":["trace[1444792618] 'range keys from in-memory index tree'  (duration: 183.001579ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:54:27.123193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":728}
	{"level":"info","ts":"2024-04-29T12:54:27.144057Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":728,"took":"20.097774ms","hash":3480846131,"current-db-size-bytes":2486272,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2486272,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-04-29T12:54:27.144146Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3480846131,"revision":728,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T12:59:27.141313Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2024-04-29T12:59:27.15069Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1058,"took":"9.053919ms","hash":1147045013,"current-db-size-bytes":2486272,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-04-29T12:59:27.150749Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1147045013,"revision":1058,"compact-revision":728}
	{"level":"info","ts":"2024-04-29T13:00:10.472405Z","caller":"traceutil/trace.go:171","msg":"trace[448967738] transaction","detail":"{read_only:false; response_revision:1401; number_of_response:1; }","duration":"104.183401ms","start":"2024-04-29T13:00:10.368202Z","end":"2024-04-29T13:00:10.472386Z","steps":["trace[448967738] 'process raft request'  (duration: 103.9877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T13:00:12.2775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.285688ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7123163170697625521 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.26.185.116\" mod_revision:1393 > success:<request_put:<key:\"/registry/masterleases/172.26.185.116\" value_size:67 lease:7123163170697625518 >> failure:<request_range:<key:\"/registry/masterleases/172.26.185.116\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T13:00:12.277821Z","caller":"traceutil/trace.go:171","msg":"trace[1488564331] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"359.865091ms","start":"2024-04-29T13:00:11.917857Z","end":"2024-04-29T13:00:12.277722Z","steps":["trace[1488564331] 'process raft request'  (duration: 157.261702ms)","trace[1488564331] 'compare'  (duration: 201.901287ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T13:00:12.277931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T13:00:11.917837Z","time spent":"360.063091ms","remote":"127.0.0.1:40784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.26.185.116\" mod_revision:1393 > success:<request_put:<key:\"/registry/masterleases/172.26.185.116\" value_size:67 lease:7123163170697625518 >> failure:<request_range:<key:\"/registry/masterleases/172.26.185.116\" > >"}
	{"level":"warn","ts":"2024-04-29T13:00:12.622546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.089274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-04-29T13:00:12.623023Z","caller":"traceutil/trace.go:171","msg":"trace[2018469111] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1402; }","duration":"143.598675ms","start":"2024-04-29T13:00:12.479405Z","end":"2024-04-29T13:00:12.623004Z","steps":["trace[2018469111] 'range keys from in-memory index tree'  (duration: 142.867974ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:00:12.811709Z","caller":"traceutil/trace.go:171","msg":"trace[1031135043] transaction","detail":"{read_only:false; response_revision:1403; number_of_response:1; }","duration":"183.360752ms","start":"2024-04-29T13:00:12.628327Z","end":"2024-04-29T13:00:12.811687Z","steps":["trace[1031135043] 'process raft request'  (duration: 183.147551ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:00:13.973272Z","caller":"traceutil/trace.go:171","msg":"trace[1964906285] linearizableReadLoop","detail":"{readStateIndex:1614; appliedIndex:1613; }","duration":"147.070382ms","start":"2024-04-29T13:00:13.826166Z","end":"2024-04-29T13:00:13.973236Z","steps":["trace[1964906285] 'read index received'  (duration: 146.712781ms)","trace[1964906285] 'applied index is now lower than readState.Index'  (duration: 357.001µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T13:00:13.973711Z","caller":"traceutil/trace.go:171","msg":"trace[1409640681] transaction","detail":"{read_only:false; response_revision:1404; number_of_response:1; }","duration":"274.130725ms","start":"2024-04-29T13:00:13.699564Z","end":"2024-04-29T13:00:13.973694Z","steps":["trace[1409640681] 'process raft request'  (duration: 273.512723ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T13:00:13.974708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.261682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T13:00:13.974812Z","caller":"traceutil/trace.go:171","msg":"trace[190992287] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1404; }","duration":"148.682885ms","start":"2024-04-29T13:00:13.826119Z","end":"2024-04-29T13:00:13.974801Z","steps":["trace[190992287] 'agreement among raft nodes before linearized reading'  (duration: 147.282882ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:00:14.929143Z","caller":"traceutil/trace.go:171","msg":"trace[1697694867] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"106.584004ms","start":"2024-04-29T13:00:14.82254Z","end":"2024-04-29T13:00:14.929124Z","steps":["trace[1697694867] 'process raft request'  (duration: 106.386904ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:00:15.561377Z","caller":"traceutil/trace.go:171","msg":"trace[1456690940] transaction","detail":"{read_only:false; response_revision:1406; number_of_response:1; }","duration":"123.239435ms","start":"2024-04-29T13:00:15.438117Z","end":"2024-04-29T13:00:15.561357Z","steps":["trace[1456690940] 'process raft request'  (duration: 123.119735ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T13:04:27.162823Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1359}
	{"level":"info","ts":"2024-04-29T13:04:27.172832Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1359,"took":"9.414414ms","hash":899124302,"current-db-size-bytes":2486272,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-04-29T13:04:27.172881Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":899124302,"revision":1359,"compact-revision":1058}
	
	
	==> kernel <==
	 13:05:37 up 23 min,  0 users,  load average: 0.31, 0.30, 0.22
	Linux multinode-409200 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [caeb8f4bcea1] <==
	I0429 13:04:47.962598       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:04:57.975164       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:04:57.975193       1 main.go:227] handling current node
	I0429 13:04:57.975205       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:04:57.975211       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:04:57.975981       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:04:57.976061       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:05:07.985148       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:05:07.985284       1 main.go:227] handling current node
	I0429 13:05:07.985300       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:05:07.985309       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:05:07.985872       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:05:07.985888       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:05:18.002241       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:05:18.002293       1 main.go:227] handling current node
	I0429 13:05:18.002316       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:05:18.002324       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:05:18.003450       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:05:18.003493       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:05:28.016973       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:05:28.017021       1 main.go:227] handling current node
	I0429 13:05:28.017034       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:05:28.017042       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:05:28.017748       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:05:28.017826       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [390664a85913] <==
	I0429 12:44:31.796178       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 12:44:32.325302       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 12:44:32.866487       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 12:44:32.926171       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 12:44:32.964615       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 12:44:46.825589       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 12:44:47.230258       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0429 12:47:42.375122       1 trace.go:236] Trace[1523062445]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.26.185.116,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 12:47:41.760) (total time: 614ms):
	Trace[1523062445]: ---"Transaction prepared" 158ms (12:47:41.920)
	Trace[1523062445]: ---"Txn call completed" 454ms (12:47:42.375)
	Trace[1523062445]: [614.898429ms] [614.898429ms] END
	E0429 12:48:44.534701       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58575: use of closed network connection
	E0429 12:48:45.098245       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58577: use of closed network connection
	E0429 12:48:45.746138       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58579: use of closed network connection
	E0429 12:48:46.297580       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58581: use of closed network connection
	E0429 12:48:46.844349       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58583: use of closed network connection
	E0429 12:48:47.384985       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58585: use of closed network connection
	E0429 12:48:48.418000       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58588: use of closed network connection
	E0429 12:48:58.947143       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58590: use of closed network connection
	E0429 12:48:59.495039       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58593: use of closed network connection
	E0429 12:49:10.043335       1 conn.go:339] Error on socket receive: read tcp 172.26.185.116:8443->172.26.176.1:58595: use of closed network connection
	I0429 12:52:42.329654       1 trace.go:236] Trace[1661920491]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.26.185.116,type:*v1.Endpoints,resource:apiServerIPInfo (29-Apr-2024 12:52:41.780) (total time: 549ms):
	Trace[1661920491]: ---"Transaction prepared" 186ms (12:52:41.968)
	Trace[1661920491]: ---"Txn call completed" 361ms (12:52:42.329)
	Trace[1661920491]: [549.112727ms] [549.112727ms] END
	
	
	==> kube-controller-manager [5adb6a9084e4] <==
	I0429 12:44:48.225494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.476922ms"
	I0429 12:44:48.261461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.901256ms"
	I0429 12:44:48.261977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="350.603µs"
	I0429 12:45:01.593292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.901µs"
	I0429 12:45:01.625573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="248.901µs"
	I0429 12:45:03.575482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.075381ms"
	I0429 12:45:03.577737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.2µs"
	I0429 12:45:06.222594       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 12:47:49.237379       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-409200-m02\" does not exist"
	I0429 12:47:49.263216       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-409200-m02" podCIDRs=["10.244.1.0/24"]
	I0429 12:47:51.255160       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m02"
	I0429 12:48:12.497091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 12:48:39.315624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.709457ms"
	I0429 12:48:39.348543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.825151ms"
	I0429 12:48:39.350006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.599µs"
	I0429 12:48:41.641664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.408001ms"
	I0429 12:48:41.641949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.401µs"
	I0429 12:48:41.676091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.426762ms"
	I0429 12:48:41.676205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.201µs"
	I0429 12:52:37.159818       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-409200-m03\" does not exist"
	I0429 12:52:37.160747       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 12:52:37.177713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-409200-m03" podCIDRs=["10.244.2.0/24"]
	I0429 12:52:41.323171       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m03"
	I0429 12:52:56.218996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m03"
	I0429 13:00:36.459927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	
	
	==> kube-proxy [3ba8caba4bc5] <==
	I0429 12:44:49.113215       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:44:49.178365       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.185.116"]
	I0429 12:44:49.235481       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:44:49.235656       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:44:49.235683       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:44:49.240257       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:44:49.243830       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:44:49.243910       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:44:49.247315       1 config.go:192] "Starting service config controller"
	I0429 12:44:49.248504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:44:49.248691       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:44:49.248945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:44:49.251257       1 config.go:319] "Starting node config controller"
	I0429 12:44:49.251298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:44:49.349845       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:44:49.349850       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:44:49.351890       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [315326a1ce10] <==
	W0429 12:44:30.427247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:44:30.427377       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:44:30.447600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.448660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.467546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:44:30.467843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:44:30.543006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:44:30.543577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:44:30.596529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 12:44:30.596652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 12:44:30.643354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.643664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.668341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.668936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.756255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:44:30.756684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:44:30.842695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:44:30.842746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:44:30.878228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:44:30.878284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 12:44:30.878602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:44:30.878712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:44:30.990384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:44:30.990868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 12:44:32.117111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:01:33 multinode-409200 kubelet[2127]: E0429 13:01:33.019209    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:01:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:01:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:01:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:01:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:02:33 multinode-409200 kubelet[2127]: E0429 13:02:33.018223    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:02:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:02:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:02:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:02:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:03:33 multinode-409200 kubelet[2127]: E0429 13:03:33.018485    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:03:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:03:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:03:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:03:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:04:33 multinode-409200 kubelet[2127]: E0429 13:04:33.017830    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:04:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:04:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:04:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:04:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 13:05:33 multinode-409200 kubelet[2127]: E0429 13:05:33.018346    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:05:33 multinode-409200 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:05:33 multinode-409200 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:05:33 multinode-409200 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:05:33 multinode-409200 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:05:29.427891    6228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
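
The kubelet section in the log above repeats the same "Could not set up iptables canary" error once a minute: ip6tables cannot initialize the nat table inside the guest, so the IPv6 canary chain is never created (the IPv4 path kube-proxy actually uses is unaffected). A hedged way to check this from inside the VM, assuming the guest kernel ships the usual module name (these commands are illustrative and were not run as part of this report):

    # open a shell in the guest first: minikube ssh -p multinode-409200
    lsmod | grep ip6table_nat      # is the IPv6 NAT module loaded?
    sudo modprobe ip6table_nat     # try loading it if the kernel provides it
    sudo ip6tables -t nat -L -n    # re-check the table the canary chain needs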
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200: (12.9297102s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-409200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (286.25s)
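
The stderr warning about the unresolvable "default" Docker CLI context precedes every minikube invocation on this host; it is harmless for Hyper-V runs but is exactly what trips the expected-empty-stderr assertions elsewhere in this report. One possible cleanup on the Jenkins host, assuming the stale pointer lives in the Docker CLI config file named in the warning:

    docker context ls            # the selected context's metadata directory is missing
    docker context use default   # point the CLI back at the built-in default context
    # alternatively, remove the "currentContext" key from
    # C:\Users\jenkins.minikube6\.docker\config.json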

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (366.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-409200
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-409200
E0429 13:06:27.495052    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:07:24.777484    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-409200: (2m31.8192623s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-409200 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-409200 --wait=true -v=8 --alsologtostderr: exit status 1 (2m58.4159515s)

                                                
                                                
-- stdout --
	* [multinode-409200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-409200" primary control-plane node in "multinode-409200" cluster
	* Restarting existing hyperv VM for "multinode-409200" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-409200-m02" worker node in "multinode-409200" cluster
	* Restarting existing hyperv VM for "multinode-409200-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:08:25.658407   14008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 13:08:25.761784   14008 out.go:291] Setting OutFile to fd 1560 ...
	I0429 13:08:25.761784   14008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:25.761784   14008 out.go:304] Setting ErrFile to fd 1592...
	I0429 13:08:25.761784   14008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:25.791908   14008 out.go:298] Setting JSON to false
	I0429 13:08:25.796369   14008 start.go:129] hostinfo: {"hostname":"minikube6","uptime":37578,"bootTime":1714358527,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 13:08:25.796369   14008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 13:08:25.925859   14008 out.go:177] * [multinode-409200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 13:08:26.027798   14008 notify.go:220] Checking for updates...
	I0429 13:08:26.129954   14008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:08:26.271046   14008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:08:26.413859   14008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 13:08:26.635759   14008 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 13:08:26.770158   14008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:08:26.820621   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:08:26.820621   14008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:08:32.684693   14008 out.go:177] * Using the hyperv driver based on existing profile
	I0429 13:08:32.784878   14008 start.go:297] selected driver: hyperv
	I0429 13:08:32.784878   14008 start.go:901] validating driver "hyperv" against &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:08:32.784878   14008 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:08:32.852889   14008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:08:32.853124   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:08:32.853124   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:08:32.853392   14008 start.go:340] cluster config:
	{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:08:32.853753   14008 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:08:32.877592   14008 out.go:177] * Starting "multinode-409200" primary control-plane node in "multinode-409200" cluster
	I0429 13:08:32.959876   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:08:32.960617   14008 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 13:08:32.960699   14008 cache.go:56] Caching tarball of preloaded images
	I0429 13:08:32.960986   14008 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 13:08:32.961281   14008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 13:08:32.961705   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:08:32.965509   14008 start.go:360] acquireMachinesLock for multinode-409200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:08:32.965834   14008 start.go:364] duration metric: took 93.9µs to acquireMachinesLock for "multinode-409200"
	I0429 13:08:32.965963   14008 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:08:32.966056   14008 fix.go:54] fixHost starting: 
	I0429 13:08:32.966295   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:35.788000   14008 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:08:35.788000   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:35.788000   14008 fix.go:112] recreateIfNeeded on multinode-409200: state=Stopped err=<nil>
	W0429 13:08:35.788000   14008 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:08:35.795978   14008 out.go:177] * Restarting existing hyperv VM for "multinode-409200" ...
	I0429 13:08:35.798552   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200
	I0429 13:08:39.042010   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:39.042010   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:39.042194   14008 main.go:141] libmachine: Waiting for host to start...
	I0429 13:08:39.042251   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:44.011346   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:44.011346   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:45.015181   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:47.322916   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:47.322916   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:47.323178   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:50.059132   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:50.059132   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:51.069218   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:53.360106   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:53.361130   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:53.361130   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:56.064919   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:56.065338   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:57.071277   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:01.956175   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:09:01.956175   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:02.957308   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:05.219018   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:05.219018   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:05.219585   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:07.896792   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:07.896792   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:07.900478   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:12.823053   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:12.823449   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:12.823724   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:09:12.826581   14008 machine.go:94] provisionDockerMachine start ...
	I0429 13:09:12.826826   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:15.095780   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:15.095780   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:15.096632   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:17.746773   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:17.747601   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:17.753789   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:17.754442   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:17.754442   14008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:09:17.905063   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 13:09:17.905063   14008 buildroot.go:166] provisioning hostname "multinode-409200"
	I0429 13:09:17.905596   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:20.135330   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:20.135330   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:20.135930   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:22.816213   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:22.816213   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:22.823408   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:22.823601   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:22.823601   14008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200 && echo "multinode-409200" | sudo tee /etc/hostname
	I0429 13:09:23.011604   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200
	
	I0429 13:09:23.011604   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:25.191924   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:25.193006   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:25.193122   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:27.891715   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:27.891715   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:27.897717   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:27.898303   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:27.898470   14008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:09:28.063541   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
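Taken together, the SSH commands above make hostname provisioning idempotent: set the kernel hostname, persist it to /etc/hostname, and touch /etc/hosts only if no line already names the machine (the log's version additionally rewrites an existing 127.0.1.1 entry in place). A condensed sketch of the same steps, runnable on the guest, with the name taken from the log:

    NAME=multinode-409200
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # append a loopback alias only when no existing line ends with the hostname
    grep -q "[[:space:]]$NAME\$" /etc/hosts || \
      echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts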
	I0429 13:09:28.063541   14008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 13:09:28.063541   14008 buildroot.go:174] setting up certificates
	I0429 13:09:28.063541   14008 provision.go:84] configureAuth start
	I0429 13:09:28.064075   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:30.271497   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:30.271497   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:30.272145   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:32.931304   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:32.931559   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:32.931661   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:35.138250   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:35.138954   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:35.138954   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:37.771701   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:37.772390   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:37.772390   14008 provision.go:143] copyHostCerts
	I0429 13:09:37.772619   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 13:09:37.772914   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 13:09:37.772992   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 13:09:37.773470   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 13:09:37.774674   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 13:09:37.774940   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 13:09:37.774940   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 13:09:37.775350   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 13:09:37.776466   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 13:09:37.776813   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 13:09:37.776813   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 13:09:37.776813   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 13:09:37.777791   14008 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200 san=[127.0.0.1 172.26.179.21 localhost minikube multinode-409200]
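configureAuth mints a Docker server certificate signed by the profile's CA, with exactly the SAN list shown above (loopback, the VM's address, and its names). minikube does this in Go; an equivalent openssl sketch is shown here only to make the inputs explicit, with file names following the log and paths shortened:

    # key + CSR for the server certificate
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.multinode-409200"
    # sign with the minikube CA, attaching the same SANs as the log line above
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 1095 -extfile <(printf \
      "subjectAltName=IP:127.0.0.1,IP:172.26.179.21,DNS:localhost,DNS:minikube,DNS:multinode-409200")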
	I0429 13:09:37.999208   14008 provision.go:177] copyRemoteCerts
	I0429 13:09:38.014017   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:09:38.014017   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:40.288292   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:40.288292   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:40.289423   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:42.972024   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:42.972783   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:42.973426   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:09:43.106746   14008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0925688s)
	I0429 13:09:43.106746   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 13:09:43.107222   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:09:43.160595   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 13:09:43.161126   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 13:09:43.223841   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 13:09:43.223841   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:09:43.280305   14008 provision.go:87] duration metric: took 15.2160875s to configureAuth
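copyRemoteCerts pushed the CA and the freshly generated server pair to the paths dockerd is configured to read (--tlscacert, --tlscert, --tlskey in the unit file written below). A hedged spot-check that the material landed, with the key path and user abbreviated from the log:

    ssh -i .minikube/machines/multinode-409200/id_rsa docker@172.26.179.21 \
      'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'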
	I0429 13:09:43.280404   14008 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:09:43.281129   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:09:43.281129   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:45.483597   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:45.484214   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:45.484214   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:48.174925   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:48.174925   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:48.182082   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:48.182082   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:48.182082   14008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 13:09:48.326663   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 13:09:48.326663   14008 buildroot.go:70] root file system type: tmpfs
	I0429 13:09:48.326927   14008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 13:09:48.326927   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:50.516696   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:50.517223   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:50.517317   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:53.203921   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:53.204493   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:53.212208   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:53.212835   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:53.212835   14008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 13:09:53.379334   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 13:09:53.379334   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:58.139772   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:58.139772   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:58.146666   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:58.147237   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:58.147314   14008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 13:10:00.796240   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 13:10:00.796240   14008 machine.go:97] duration metric: took 47.9692944s to provisionDockerMachine
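The unit install above follows a write-new/diff/swap pattern: the candidate unit is written to docker.service.new, diffed against the live unit, and only on a difference moved into place and followed by daemon-reload, enable, and restart. Because the root filesystem is tmpfs (detected earlier), the diff always fails on a fresh boot ("No such file or directory"), so Docker is re-enabled on every start. The same pattern as a standalone sketch:

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    # swap in the new unit only when it differs, or does not exist yet
    if ! sudo diff -u "$cur" "$new"; then
      sudo mv "$new" "$cur"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi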
	I0429 13:10:00.796351   14008 start.go:293] postStartSetup for "multinode-409200" (driver="hyperv")
	I0429 13:10:00.796351   14008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:10:00.810733   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:10:00.811698   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:02.973540   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:02.973540   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:02.974257   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:05.664930   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:05.664930   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:05.666232   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:05.784286   14008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9725501s)
	I0429 13:10:05.801660   14008 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:10:05.810254   14008 command_runner.go:130] > NAME=Buildroot
	I0429 13:10:05.810254   14008 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 13:10:05.810254   14008 command_runner.go:130] > ID=buildroot
	I0429 13:10:05.810254   14008 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 13:10:05.810254   14008 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 13:10:05.810254   14008 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:10:05.810537   14008 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 13:10:05.811074   14008 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 13:10:05.813448   14008 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 13:10:05.813448   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 13:10:05.829733   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:10:05.853670   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 13:10:05.912063   14008 start.go:296] duration metric: took 5.1156729s for postStartSetup
	I0429 13:10:05.912196   14008 fix.go:56] duration metric: took 1m32.9455259s for fixHost
	I0429 13:10:05.912312   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:10.747445   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:10.747445   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:10.757920   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:10:10.757920   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:10:10.757920   14008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 13:10:10.912573   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714396210.915243025
	
	I0429 13:10:10.912573   14008 fix.go:216] guest clock: 1714396210.915243025
	I0429 13:10:10.912573   14008 fix.go:229] Guest: 2024-04-29 13:10:10.915243025 +0000 UTC Remote: 2024-04-29 13:10:05.912239 +0000 UTC m=+100.367905601 (delta=5.003004025s)
	I0429 13:10:10.912797   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:13.084036   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:13.084036   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:13.084853   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:15.775768   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:15.776165   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:15.782500   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:10:15.782641   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:10:15.782641   14008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714396210
	I0429 13:10:15.945111   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 13:10:10 UTC 2024
	
	I0429 13:10:15.945111   14008 fix.go:236] clock set: Mon Apr 29 13:10:10 UTC 2024
	 (err=<nil>)
	I0429 13:10:15.945111   14008 start.go:83] releasing machines lock for "multinode-409200", held for 1m42.9784947s
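fixHost compares the guest clock (date +%s.%N over SSH) against the host's wall clock; here the 5.003s delta exceeded minikube's tolerance, so the guest was stepped with date -s. A hedged sketch of the same measurement, with key path, user, and address abbreviated from the log (requires bc):

    remote() { ssh -i .minikube/machines/multinode-409200/id_rsa docker@172.26.179.21 "$@"; }
    guest=$(remote date +%s.%N)
    host=$(date +%s.%N)
    echo "delta=$(echo "$guest - $host" | bc)s"
    # step the guest clock to the host's current epoch second
    remote sudo date -s "@${host%.*}"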
	I0429 13:10:15.945111   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:18.153880   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:18.154498   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:18.154498   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:20.781121   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:20.781121   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:20.787293   14008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:10:20.787402   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:20.797785   14008 ssh_runner.go:195] Run: cat /version.json
	I0429 13:10:20.797785   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:25.805087   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:25.805087   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:25.805636   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:25.834229   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:25.834513   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:25.834698   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:25.912883   14008 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 13:10:25.912883   14008 ssh_runner.go:235] Completed: cat /version.json: (5.1150588s)
	I0429 13:10:25.926120   14008 ssh_runner.go:195] Run: systemctl --version
	I0429 13:10:26.038845   14008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 13:10:26.038936   14008 command_runner.go:130] > systemd 252 (252)
	I0429 13:10:26.039017   14008 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 13:10:26.039111   14008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2517537s)
	I0429 13:10:26.052643   14008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 13:10:26.061649   14008 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 13:10:26.062700   14008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:10:26.078783   14008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:10:26.111328   14008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 13:10:26.111328   14008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
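To keep kindnet as the only CNI, minikube parks any pre-baked bridge/podman configs by renaming them with a .mk_disabled suffix; here that caught 87-podman-bridge.conflist. The find invocation above, re-quoted so it can be pasted into a shell:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;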
	I0429 13:10:26.111328   14008 start.go:494] detecting cgroup driver to use...
	I0429 13:10:26.111328   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:10:26.146411   14008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 13:10:26.162573   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 13:10:26.201194   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 13:10:26.225029   14008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 13:10:26.239018   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 13:10:26.273983   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:10:26.311530   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 13:10:26.350470   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:10:26.387122   14008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:10:26.421028   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 13:10:26.458166   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 13:10:26.493411   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 13:10:26.528887   14008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:10:26.549089   14008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 13:10:26.564803   14008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:10:26.600072   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:26.834923   14008 ssh_runner.go:195] Run: sudo systemctl restart containerd
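The run of sed commands above rewrites /etc/containerd/config.toml so containerd matches the cluster's expectations: cgroupfs as the cgroup driver, the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. Condensed into a single pass using the same expressions:

    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd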
	I0429 13:10:26.881179   14008 start.go:494] detecting cgroup driver to use...
	I0429 13:10:26.896666   14008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 13:10:26.930303   14008 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 13:10:26.930303   14008 command_runner.go:130] > [Unit]
	I0429 13:10:26.930303   14008 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 13:10:26.930303   14008 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 13:10:26.930303   14008 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 13:10:26.930303   14008 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 13:10:26.930303   14008 command_runner.go:130] > StartLimitBurst=3
	I0429 13:10:26.930303   14008 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 13:10:26.930303   14008 command_runner.go:130] > [Service]
	I0429 13:10:26.930303   14008 command_runner.go:130] > Type=notify
	I0429 13:10:26.930303   14008 command_runner.go:130] > Restart=on-failure
	I0429 13:10:26.930303   14008 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 13:10:26.930303   14008 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 13:10:26.930303   14008 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 13:10:26.930303   14008 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 13:10:26.930303   14008 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 13:10:26.930303   14008 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 13:10:26.930303   14008 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 13:10:26.930931   14008 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 13:10:26.930931   14008 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 13:10:26.931014   14008 command_runner.go:130] > ExecStart=
	I0429 13:10:26.931069   14008 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 13:10:26.931167   14008 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 13:10:26.931254   14008 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 13:10:26.931254   14008 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitNOFILE=infinity
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitNPROC=infinity
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitCORE=infinity
	I0429 13:10:26.931390   14008 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 13:10:26.931390   14008 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 13:10:26.931451   14008 command_runner.go:130] > TasksMax=infinity
	I0429 13:10:26.931451   14008 command_runner.go:130] > TimeoutStartSec=0
	I0429 13:10:26.931451   14008 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 13:10:26.931518   14008 command_runner.go:130] > Delegate=yes
	I0429 13:10:26.931518   14008 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 13:10:26.931518   14008 command_runner.go:130] > KillMode=process
	I0429 13:10:26.931579   14008 command_runner.go:130] > [Install]
	I0429 13:10:26.931579   14008 command_runner.go:130] > WantedBy=multi-user.target
	I0429 13:10:26.948663   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:10:26.994506   14008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:10:27.046758   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:10:27.094029   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:10:27.139164   14008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 13:10:27.216717   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:10:27.247422   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:10:27.291788   14008 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 13:10:27.306669   14008 ssh_runner.go:195] Run: which cri-dockerd
	I0429 13:10:27.314373   14008 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 13:10:27.328161   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 13:10:27.349063   14008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 13:10:27.403749   14008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 13:10:27.634501   14008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 13:10:27.859745   14008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 13:10:27.860077   14008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 13:10:27.910023   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:28.139535   14008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 13:10:30.902242   14008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7626857s)
	I0429 13:10:30.916749   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 13:10:30.959921   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 13:10:30.997388   14008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 13:10:31.238328   14008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 13:10:31.467162   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:31.697642   14008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 13:10:31.743692   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 13:10:31.782725   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:32.005975   14008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 13:10:32.140481   14008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 13:10:32.154151   14008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 13:10:32.164090   14008 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 13:10:32.164090   14008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 13:10:32.164090   14008 command_runner.go:130] > Device: 0,22	Inode: 844         Links: 1
	I0429 13:10:32.164090   14008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 13:10:32.164090   14008 command_runner.go:130] > Access: 2024-04-29 13:10:32.042556180 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] > Modify: 2024-04-29 13:10:32.042556180 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] > Change: 2024-04-29 13:10:32.047556176 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] >  Birth: -
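Startup then blocks (up to 60s) on the cri-dockerd socket existing; the stat output above is the success case. An equivalent wait loop on the guest:

    for _ in $(seq 60); do
      [ -S /var/run/cri-dockerd.sock ] && break   # -S: path exists and is a socket
      sleep 1
    done
    stat /var/run/cri-dockerd.sock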
	I0429 13:10:32.164090   14008 start.go:562] Will wait 60s for crictl version
	I0429 13:10:32.176665   14008 ssh_runner.go:195] Run: which crictl
	I0429 13:10:32.182667   14008 command_runner.go:130] > /usr/bin/crictl
	I0429 13:10:32.197313   14008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:10:32.261405   14008 command_runner.go:130] > Version:  0.1.0
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeName:  docker
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 13:10:32.261405   14008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 13:10:32.270999   14008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 13:10:32.307247   14008 command_runner.go:130] > 26.0.2
	I0429 13:10:32.319421   14008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 13:10:32.356661   14008 command_runner.go:130] > 26.0.2
	I0429 13:10:32.362190   14008 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 13:10:32.362602   14008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 13:10:32.367164   14008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 13:10:32.371536   14008 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 13:10:32.371565   14008 ip.go:210] interface addr: 172.26.176.1/20
	I0429 13:10:32.383828   14008 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 13:10:32.392642   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:10:32.419538   14008 kubeadm.go:877] updating cluster {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:10:32.419782   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:10:32.430909   14008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 13:10:32.458204   14008 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 13:10:32.458565   14008 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 13:10:32.458565   14008 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:10:32.458565   14008 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0429 13:10:32.458848   14008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0429 13:10:32.458848   14008 docker.go:615] Images already preloaded, skipping extraction
	I0429 13:10:32.472906   14008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 13:10:32.497751   14008 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0429 13:10:32.497751   14008 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:10:32.497751   14008 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 13:10:32.497751   14008 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:10:32.497751   14008 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0429 13:10:32.497879   14008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0429 13:10:32.498004   14008 cache_images.go:84] Images are preloaded, skipping loading
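The preload check is a plain set comparison: minikube lists what the runtime already has and, since every image Kubernetes v1.30.0 needs is present, skips extracting the preload tarball. A hedged one-liner to confirm the same core set on the guest:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep -cE \
      'kube-(apiserver|controller-manager|scheduler|proxy):v1.30.0|etcd:3.5.12-0|coredns:v1.11.1|pause:3.9'
    # expect 7 matching lines for the control-plane image set above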
	I0429 13:10:32.498004   14008 kubeadm.go:928] updating node { 172.26.179.21 8443 v1.30.0 docker true true} ...
	I0429 13:10:32.498150   14008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.179.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
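The kubelet drop-in above pins the node's identity: --hostname-override and --node-ip keep the kubelet registered under the name and address minikube chose rather than whatever the guest would self-report. Once the unit and drop-in are scp'd (a few lines below), the sequence is the usual reload-and-start, equivalent to:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    # confirm the pinned flags took effect
    systemctl show kubelet -p ExecStart --no-pager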
	I0429 13:10:32.509222   14008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 13:10:32.543139   14008 command_runner.go:130] > cgroupfs
	I0429 13:10:32.543341   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:10:32.543341   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:10:32.543341   14008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:10:32.543341   14008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.179.21 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-409200 NodeName:multinode-409200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.179.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.179.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:10:32.543341   14008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.179.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-409200"
	  kubeletExtraArgs:
	    node-ip: 172.26.179.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
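That single file carries all four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sanity check, kubeadm (v1.26 and later) ships a validate subcommand that can vet such a file before any init/join consumes it:

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml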
	
	I0429 13:10:32.558249   14008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubeadm
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubectl
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubelet
	I0429 13:10:32.578764   14008 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:10:32.593717   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:10:32.618298   14008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 13:10:32.654208   14008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:10:32.691118   14008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 13:10:32.749763   14008 ssh_runner.go:195] Run: grep 172.26.179.21	control-plane.minikube.internal$ /etc/hosts
	I0429 13:10:32.756903   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.179.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 13:10:32.794045   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:33.010838   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:10:33.043306   14008 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.179.21
	I0429 13:10:33.043306   14008 certs.go:194] generating shared ca certs ...
	I0429 13:10:33.043404   14008 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.044260   14008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 13:10:33.044594   14008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 13:10:33.044777   14008 certs.go:256] generating profile certs ...
	I0429 13:10:33.045613   14008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key
	I0429 13:10:33.045774   14008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918
	I0429 13:10:33.045835   14008 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.179.21]
	I0429 13:10:33.772814   14008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 ...
	I0429 13:10:33.772814   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918: {Name:mkc683afb0b6b1567608b8dec0da29a4359533c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.774811   14008 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918 ...
	I0429 13:10:33.774811   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918: {Name:mk75928da1c49eef78614e437525c498adb354d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.775207   14008 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt
	I0429 13:10:33.790283   14008 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key
	I0429 13:10:33.792365   14008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key
	I0429 13:10:33.792465   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 13:10:33.792674   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 13:10:33.793000   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 13:10:33.793362   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 13:10:33.793667   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 13:10:33.793971   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 13:10:33.794200   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 13:10:33.794452   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 13:10:33.795479   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 13:10:33.795963   14008 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 13:10:33.796141   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 13:10:33.796570   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 13:10:33.796858   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 13:10:33.797230   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 13:10:33.797621   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 13:10:33.797956   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 13:10:33.798205   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:33.798428   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 13:10:33.799665   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:10:33.853915   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:10:33.907546   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:10:33.957137   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 13:10:34.012901   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:10:34.071279   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:10:34.134340   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:10:34.187889   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 13:10:34.243370   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 13:10:34.314118   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:10:34.363407   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 13:10:34.415530   14008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:10:34.472214   14008 ssh_runner.go:195] Run: openssl version
	I0429 13:10:34.481400   14008 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 13:10:34.495705   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 13:10:34.532066   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.539647   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.539767   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.552197   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.562004   14008 command_runner.go:130] > 3ec20f2e
	I0429 13:10:34.577762   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:10:34.613665   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:10:34.646810   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.654603   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.654603   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.669219   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.679324   14008 command_runner.go:130] > b5213941
	I0429 13:10:34.691573   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:10:34.729431   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 13:10:34.773909   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.782813   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.782813   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.798778   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.809208   14008 command_runner.go:130] > 51391683
	I0429 13:10:34.823093   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
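Each of the three blocks above installs one PEM into the OpenSSL trust directory: compute the certificate's subject hash, then link /etc/ssl/certs/<hash>.0 to the file. A minimal sketch of that convention, assuming the openssl binary is on PATH and the process can write to /etc/ssl/certs:

// hashlink.go - sketch of the CA installation step in the log: hash the
// cert's subject with openssl, then create the <hash>.0 trust-dir symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Printf("linked %s -> %s\n", link, pemPath)
}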
	I0429 13:10:34.858902   14008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:10:34.866824   14008 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:10:34.866824   14008 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 13:10:34.866824   14008 command_runner.go:130] > Device: 8,1	Inode: 4196178     Links: 1
	I0429 13:10:34.866824   14008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:10:34.866936   14008 command_runner.go:130] > Access: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] > Modify: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] > Change: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] >  Birth: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.880826   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 13:10:34.890997   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.905146   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 13:10:34.918040   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.934613   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 13:10:34.946534   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.961633   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 13:10:34.971329   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.988042   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 13:10:34.997631   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:35.012167   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 13:10:35.025130   14008 command_runner.go:130] > Certificate will not expire
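Each check above mirrors `openssl x509 -checkend 86400`: fail if the certificate will no longer be valid 24 hours from now. A minimal Go sketch of the same test with crypto/x509 (the file path is taken from the log):

// cert_checkend.go - sketch replicating `openssl x509 -checkend 86400`:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend N asks: is the cert still valid N seconds from now?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}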
	I0429 13:10:35.025696   14008 kubeadm.go:391] StartCluster: {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:10:35.039648   14008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 13:10:35.079237   14008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/minikube/etcd:
	I0429 13:10:35.101169   14008 command_runner.go:130] > member
	W0429 13:10:35.101169   14008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 13:10:35.101169   14008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 13:10:35.101169   14008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 13:10:35.114340   14008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 13:10:35.134421   14008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:10:35.135942   14008 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-409200" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:35.136677   14008 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-409200" cluster setting kubeconfig missing "multinode-409200" context setting]
	I0429 13:10:35.137432   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:35.154685   14008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:35.155482   14008 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.179.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 13:10:35.156905   14008 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 13:10:35.169839   14008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 13:10:35.192119   14008 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0429 13:10:35.192119   14008 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0429 13:10:35.192119   14008 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0429 13:10:35.192119   14008 command_runner.go:130] >  kind: InitConfiguration
	I0429 13:10:35.192119   14008 command_runner.go:130] >  localAPIEndpoint:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -  advertiseAddress: 172.26.185.116
	I0429 13:10:35.192119   14008 command_runner.go:130] > +  advertiseAddress: 172.26.179.21
	I0429 13:10:35.192119   14008 command_runner.go:130] >    bindPort: 8443
	I0429 13:10:35.192119   14008 command_runner.go:130] >  bootstrapTokens:
	I0429 13:10:35.192119   14008 command_runner.go:130] >    - groups:
	I0429 13:10:35.192119   14008 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0429 13:10:35.192119   14008 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0429 13:10:35.192119   14008 command_runner.go:130] >    name: "multinode-409200"
	I0429 13:10:35.192119   14008 command_runner.go:130] >    kubeletExtraArgs:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -    node-ip: 172.26.185.116
	I0429 13:10:35.192119   14008 command_runner.go:130] > +    node-ip: 172.26.179.21
	I0429 13:10:35.192119   14008 command_runner.go:130] >    taints: []
	I0429 13:10:35.192119   14008 command_runner.go:130] >  ---
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0429 13:10:35.192119   14008 command_runner.go:130] >  kind: ClusterConfiguration
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiServer:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	I0429 13:10:35.192119   14008 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	I0429 13:10:35.192119   14008 command_runner.go:130] >    extraArgs:
	I0429 13:10:35.192119   14008 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0429 13:10:35.192119   14008 command_runner.go:130] >  controllerManager:
	I0429 13:10:35.192119   14008 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.26.185.116
	+  advertiseAddress: 172.26.179.21
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-409200"
	   kubeletExtraArgs:
	-    node-ip: 172.26.185.116
	+    node-ip: 172.26.179.21
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	+  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
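The detected drift above is just the advertise address / node IP changing from 172.26.185.116 to 172.26.179.21 after the VM restart; any difference between the config on disk and the freshly rendered one forces a cluster reconfigure. A minimal sketch of that decision, using byte equality instead of the log's `diff -u`:

// drift.go - sketch of the kubeadm config drift check: if the rendered
// config differs from the one already on disk, reconfigure the cluster.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	next, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(current, next) {
		fmt.Println("config unchanged, skipping reconfigure")
		return
	}
	fmt.Println("kubeadm config drift detected, reconfiguring from the new file")
}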
	I0429 13:10:35.192119   14008 kubeadm.go:1154] stopping kube-system containers ...
	I0429 13:10:35.203446   14008 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 13:10:35.238894   14008 command_runner.go:130] > 98ab9c7d6885
	I0429 13:10:35.238945   14008 command_runner.go:130] > 5a03c0724371
	I0429 13:10:35.238945   14008 command_runner.go:130] > ea71df709887
	I0429 13:10:35.238945   14008 command_runner.go:130] > ba73c7e4d62c
	I0429 13:10:35.239038   14008 command_runner.go:130] > caeb8f4bcea1
	I0429 13:10:35.239038   14008 command_runner.go:130] > 3ba8caba4bc5
	I0429 13:10:35.239038   14008 command_runner.go:130] > 3792c8bbb983
	I0429 13:10:35.239038   14008 command_runner.go:130] > 2d26cd85561d
	I0429 13:10:35.239038   14008 command_runner.go:130] > 315326a1ce10
	I0429 13:10:35.239038   14008 command_runner.go:130] > 390664a85913
	I0429 13:10:35.239038   14008 command_runner.go:130] > 5adb6a9084e4
	I0429 13:10:35.239038   14008 command_runner.go:130] > 030b6d42f50f
	I0429 13:10:35.239038   14008 command_runner.go:130] > 19fd9c3dddd4
	I0429 13:10:35.239038   14008 command_runner.go:130] > 85aab37150a1
	I0429 13:10:35.239133   14008 command_runner.go:130] > c88537851c01
	I0429 13:10:35.239133   14008 command_runner.go:130] > 5d39391ba43b
	I0429 13:10:35.239210   14008 docker.go:483] Stopping containers: [98ab9c7d6885 5a03c0724371 ea71df709887 ba73c7e4d62c caeb8f4bcea1 3ba8caba4bc5 3792c8bbb983 2d26cd85561d 315326a1ce10 390664a85913 5adb6a9084e4 030b6d42f50f 19fd9c3dddd4 85aab37150a1 c88537851c01 5d39391ba43b]
	I0429 13:10:35.250508   14008 ssh_runner.go:195] Run: docker stop 98ab9c7d6885 5a03c0724371 ea71df709887 ba73c7e4d62c caeb8f4bcea1 3ba8caba4bc5 3792c8bbb983 2d26cd85561d 315326a1ce10 390664a85913 5adb6a9084e4 030b6d42f50f 19fd9c3dddd4 85aab37150a1 c88537851c01 5d39391ba43b
	I0429 13:10:35.284989   14008 command_runner.go:130] > 98ab9c7d6885
	I0429 13:10:35.284989   14008 command_runner.go:130] > 5a03c0724371
	I0429 13:10:35.284989   14008 command_runner.go:130] > ea71df709887
	I0429 13:10:35.284989   14008 command_runner.go:130] > ba73c7e4d62c
	I0429 13:10:35.284989   14008 command_runner.go:130] > caeb8f4bcea1
	I0429 13:10:35.284989   14008 command_runner.go:130] > 3ba8caba4bc5
	I0429 13:10:35.284989   14008 command_runner.go:130] > 3792c8bbb983
	I0429 13:10:35.284989   14008 command_runner.go:130] > 2d26cd85561d
	I0429 13:10:35.284989   14008 command_runner.go:130] > 315326a1ce10
	I0429 13:10:35.285162   14008 command_runner.go:130] > 390664a85913
	I0429 13:10:35.285162   14008 command_runner.go:130] > 5adb6a9084e4
	I0429 13:10:35.285162   14008 command_runner.go:130] > 030b6d42f50f
	I0429 13:10:35.285162   14008 command_runner.go:130] > 19fd9c3dddd4
	I0429 13:10:35.285162   14008 command_runner.go:130] > 85aab37150a1
	I0429 13:10:35.285162   14008 command_runner.go:130] > c88537851c01
	I0429 13:10:35.285162   14008 command_runner.go:130] > 5d39391ba43b
	I0429 13:10:35.303987   14008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 13:10:35.352160   14008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:10:35.372667   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 13:10:35.372667   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 13:10:35.373122   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 13:10:35.373122   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:10:35.373170   14008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:10:35.373170   14008 kubeadm.go:156] found existing configuration files:
	
	I0429 13:10:35.388941   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:10:35.410202   14008 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:10:35.411187   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:10:35.425094   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:10:35.471107   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:10:35.491287   14008 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:10:35.491389   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:10:35.504136   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:10:35.545372   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:10:35.564523   14008 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:10:35.565036   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:10:35.579140   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:10:35.612019   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:10:35.630364   14008 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:10:35.631398   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:10:35.644560   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
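The four grep/rm pairs above drop any kubeconfig under /etc/kubernetes that is missing or does not point at https://control-plane.minikube.internal:8443, so the following kubeadm phase rewrites them all. A minimal sketch of that cleanup:

// stale_confs.go - sketch of the stale-kubeconfig cleanup: remove any
// conf that does not reference the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so it gets rewritten.
			_ = os.Remove(f)
			fmt.Println("removed", f)
		}
	}
}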
	I0429 13:10:35.687811   14008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:10:35.717674   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:36.024096   14008 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using the existing "sa" key
	I0429 13:10:36.024363   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:10:38.048832   14008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.0243867s)
	I0429 13:10:38.048832   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 13:10:38.406873   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:10:38.518414   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.669414   14008 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
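Rather than a full `kubeadm init`, the restart path replays individual init phases against the regenerated config, as the five runs above show. A minimal sketch of the same sequence; it assumes kubeadm is on PATH, whereas the log invokes the version-pinned binary under sudo:

// phases.go - sketch of the restart sequence: run each kubeadm init
// phase in order against /var/tmp/minikube/kubeadm.yaml.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v:\n%s\n", p, out)
		if err != nil {
			panic(err)
		}
	}
}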
	I0429 13:10:38.669414   14008 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:10:38.681426   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:39.190142   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:39.684892   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.196352   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.695154   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.729155   14008 command_runner.go:130] > 1888
	I0429 13:10:40.729853   14008 api_server.go:72] duration metric: took 2.060423s to wait for apiserver process to appear ...
	I0429 13:10:40.729853   14008 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:10:40.729960   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.636826   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 13:10:44.636826   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 13:10:44.637625   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.662165   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 13:10:44.662621   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 13:10:44.740344   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.756261   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:44.756261   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:45.235230   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:45.245177   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:45.245177   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:45.738839   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:45.774814   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:45.774814   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:46.232514   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:46.249468   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:46.249468   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:46.741217   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:46.761123   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 200:
	ok
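The loop above polls /healthz roughly twice a second, logging each 403/500 body until the post-start hooks (rbac/bootstrap-roles last) settle and the endpoint returns 200. A minimal sketch of such a poll; the endpoint URL and InsecureSkipVerify are assumptions for the sketch only, since minikube verifies against the cluster CA:

// healthz_poll.go - sketch of polling an apiserver /healthz endpoint
// until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; real code should pin the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.26.179.21:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}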
	I0429 13:10:46.761123   14008 round_trippers.go:463] GET https://172.26.179.21:8443/version
	I0429 13:10:46.761123   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:46.761123   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:46.761123   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:46.775263   14008 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 13:10:46.775894   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:46 GMT
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Audit-Id: b5207000-30d0-494b-a060-a21331af6886
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:46.775963   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:46.775963   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:46.775963   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:46.775963   14008 round_trippers.go:580]     Content-Length: 263
	I0429 13:10:46.775963   14008 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 13:10:46.776171   14008 api_server.go:141] control plane version: v1.30.0
	I0429 13:10:46.776230   14008 api_server.go:131] duration metric: took 6.0463314s to wait for apiserver health ...
	I0429 13:10:46.776288   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:10:46.776288   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:10:46.778656   14008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 13:10:46.794985   14008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 13:10:46.809433   14008 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 13:10:46.809433   14008 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 13:10:46.809433   14008 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 13:10:46.809433   14008 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:10:46.809433   14008 command_runner.go:130] > Access: 2024-04-29 13:09:07.025164922 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] > Change: 2024-04-29 13:08:56.914000000 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] >  Birth: -
	I0429 13:10:46.809433   14008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 13:10:46.809433   14008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 13:10:46.892706   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 13:10:48.121191   14008 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > daemonset.apps/kindnet configured
	I0429 13:10:48.121619   14008 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2289039s)
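The CNI step above is a plain `kubectl apply` of the kindnet manifest, run over SSH with the in-VM kubeconfig and the version-pinned kubectl. A minimal sketch of that single invocation, with paths taken from the log:

// apply_cni.go - sketch of the CNI apply step: shell out to the pinned
// kubectl with an explicit kubeconfig and the staged manifest.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}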
	I0429 13:10:48.121619   14008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:10:48.121619   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:10:48.122165   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.122386   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.122518   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.130235   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.130235   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Audit-Id: 3f664a24-4c2f-49a1-b7a7-a32a9b6e3357
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.130235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.130235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.131224   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.133924   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1913"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87720 chars]
	I0429 13:10:48.141224   14008 system_pods.go:59] 12 kube-system pods found
	I0429 13:10:48.141224   14008 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 13:10:48.141224   14008 system_pods.go:61] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 13:10:48.141224   14008 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 13:10:48.141224   14008 system_pods.go:74] duration metric: took 19.6047ms to wait for pod list to return data ...
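
The raw GET above is how minikube lists kube-system pods through its round tripper; with client-go the same request is a single List call. A minimal sketch, with a placeholder kubeconfig path:

    // podlist.go: the round-tripper GET above expressed with client-go:
    // list kube-system pods and print each pod's phase. Sketch only.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
        }
    }
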
	I0429 13:10:48.141224   14008 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:10:48.142240   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes
	I0429 13:10:48.142240   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.142240   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.142240   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.146250   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:48.146250   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.146250   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.146250   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Audit-Id: cf7a0522-5ad0-4e7c-8eaf-2a6830f85f4c
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.146250   14008 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1913"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15642 chars]
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:105] duration metric: took 5.0262ms to run NodePressure ...
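
The NodePressure pass reads each node's capacity and pressure conditions out of the NodeList fetched above (here: three nodes, each reporting 2 CPUs and 17734596Ki of ephemeral storage). A sketch of the same kind of check with client-go; the kubeconfig path is a placeholder:

    // nodepressure.go: read each node's capacity and flag any pressure
    // condition that is True. Sketch of the NodePressure-style check above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
                }
            }
        }
    }
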
	I0429 13:10:48.146250   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:48.620056   14008 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 13:10:48.620056   14008 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 13:10:48.620056   14008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 13:10:48.620056   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0429 13:10:48.620056   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.620056   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.620056   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.627207   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.627266   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Audit-Id: ca83a831-000a-40ee-adc6-1d0ef2c54bde
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.627266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.627266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.627806   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1918"},"items":[{"metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1894","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0429 13:10:48.630272   14008 kubeadm.go:733] kubelet initialised
	I0429 13:10:48.630331   14008 kubeadm.go:734] duration metric: took 10.2744ms waiting for restarted kubelet to initialise ...
	I0429 13:10:48.630420   14008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:10:48.630578   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:10:48.630652   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.630652   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.630743   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.640861   14008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 13:10:48.640861   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.640861   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Audit-Id: bbe85283-9ebf-4d30-94f7-88f1348625f8
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.640861   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.642443   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1918"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87127 chars]
	I0429 13:10:48.646685   14008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.646685   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 13:10:48.646685   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.646685   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.646685   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.649858   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.650445   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Audit-Id: de488676-6aca-4d9b-80b6-85dc7fdd2116
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.650445   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.650445   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.650550   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.650773   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0429 13:10:48.651542   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.652274   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.652352   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.652352   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.654717   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.654717   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.654717   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.654717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.655708   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Audit-Id: 7adc1573-ea41-44a5-844a-53b7d89ca888
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.656018   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.656277   14008 pod_ready.go:97] node "multinode-409200" hosting pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.656277   14008 pod_ready.go:81] duration metric: took 9.5928ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.656277   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
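
pod_ready gates on the hosting node before waiting on the pod itself: if the node's Ready condition is not True, waiting on the pod is pointless, so the pod is skipped, as with coredns above. A sketch of that gate; the pod name is taken from the log and the kubeconfig path is a placeholder:

    // podgate.go: before waiting on a pod's Ready condition, check whether
    // the node named in its spec reports Ready. Sketch of the gate above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-ctb8n", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if !nodeReady(node) {
            fmt.Printf("node %s not Ready; skip waiting on pod %s\n", node.Name, pod.Name)
            return
        }
        fmt.Printf("node %s Ready; safe to wait on pod %s\n", node.Name, pod.Name)
    }
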
	I0429 13:10:48.656277   14008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.656277   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 13:10:48.656277   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.656277   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.656277   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.660017   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.660017   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Audit-Id: fe9d4865-509e-43c8-ae28-7f276d119e1e
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.660017   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.660017   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.661477   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1894","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0429 13:10:48.662269   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.663360   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.663360   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.663360   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.667241   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.667241   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Audit-Id: 1aea0693-a68a-4329-9d76-ad1b5a3a2c21
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.667241   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.667241   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.667241   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.667241   14008 pod_ready.go:97] node "multinode-409200" hosting pod "etcd-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.667241   14008 pod_ready.go:81] duration metric: took 10.9638ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.667241   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "etcd-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.667241   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.667241   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 13:10:48.667241   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.668255   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.668255   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.670266   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.670266   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.670266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.670266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Audit-Id: ec6b3514-7584-4b2d-9e19-fe4062d24ff7
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.671248   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"6b6a5200-5ddb-4315-be16-b0d86d36820f","resourceVersion":"1890","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.179.21:8443","kubernetes.io/config.hash":"67a711354a194289dea1aee475e07833","kubernetes.io/config.mirror":"67a711354a194289dea1aee475e07833","kubernetes.io/config.seen":"2024-04-29T13:10:38.602845937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0429 13:10:48.671248   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.671248   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.671248   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.671248   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.675310   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:48.675310   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.675310   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.675310   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Audit-Id: f260a138-3a6a-47a2-b11b-7d8b6ab61109
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.678251   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.678251   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-apiserver-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.678251   14008 pod_ready.go:81] duration metric: took 11.0094ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.678251   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-apiserver-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.678251   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.678251   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 13:10:48.678251   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.678251   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.679253   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.686258   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.686806   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Audit-Id: 84051d4b-0338-46e3-9ed4-8858dd2633f1
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.686806   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.686806   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.686806   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"1880","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0429 13:10:48.687652   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.687715   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.687715   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.687715   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.690402   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.690979   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Audit-Id: 00d411dd-e57f-4e1c-a643-9de858e65797
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.691031   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.691031   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.692635   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.693331   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-controller-manager-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.693331   14008 pod_ready.go:81] duration metric: took 15.0801ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.693395   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-controller-manager-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.693395   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.829030   14008 request.go:629] Waited for 135.6337ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:10:48.829349   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:10:48.829421   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.829421   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.829421   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.833267   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.833267   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Audit-Id: 5edf7f33-c1e1-4bc5-8593-ff7952c710ec
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.833585   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.833585   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.833962   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbxqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4f811c-336b-4038-b6ff-d62efffacd9b","resourceVersion":"1429","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
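
The "Waited for ... due to client-side throttling" messages beginning above come from client-go's own rate limiter, not from apiserver priority-and-fairness (the message says as much). The limiter defaults to roughly 5 requests/second with a burst of 10 and is configured on rest.Config. A sketch with illustrative values, not minikube's actual settings:

    // throttle.go: where the client-side throttling waits come from.
    // Raising QPS/Burst on the rest.Config removes them; the numbers
    // below are illustrative.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second once the burst is spent
        cfg.Burst = 100 // default burst is 10
        // Clients built from this config are throttled at the rates above.
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
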
	I0429 13:10:49.020427   14008 request.go:629] Waited for 185.624ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:10:49.020669   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:10:49.020744   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.020744   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.020809   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.025435   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:49.025435   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.025435   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.025435   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Audit-Id: 00bb0595-0536-41ca-9e19-5898faa60fb5
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.027444   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m03","uid":"d4d7c143-2c53-4eb2-9323-5c1ee0d251ea","resourceVersion":"1438","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_52_38_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4412 chars]
	I0429 13:10:49.027444   14008 pod_ready.go:97] node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:10:49.027444   14008 pod_ready.go:81] duration metric: took 334.0466ms for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:49.027988   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:10:49.027988   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.220324   14008 request.go:629] Waited for 192.2313ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:10:49.220542   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:10:49.220542   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.220542   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.220542   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.227711   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:49.227711   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Audit-Id: 8d483a1a-9ff5-4e63-9b3c-45793ff78cba
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.227711   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.227711   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.227711   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"1916","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0429 13:10:49.426771   14008 request.go:629] Waited for 197.7004ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:49.426852   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:49.426852   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.426852   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.426852   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.431517   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:49.431584   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Audit-Id: 4d99da3d-fd99-4427-9605-0f4236d4fd28
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.431584   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.431584   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.431871   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:49.432562   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-proxy-g2jp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:49.432562   14008 pod_ready.go:81] duration metric: took 404.5254ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:49.432562   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-proxy-g2jp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:49.432562   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.632744   14008 request.go:629] Waited for 200.18ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:10:49.633072   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:10:49.633072   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.633072   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.633072   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.639443   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:49.639905   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Audit-Id: 72de9fa0-738e-447d-a427-6e703b29e0ff
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.639905   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.639905   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.640257   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 13:10:49.835337   14008 request.go:629] Waited for 194.372ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:10:49.835561   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:10:49.835561   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.835832   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.835832   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.838221   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:49.838221   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.838221   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.838221   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.838221   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.839146   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.839146   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.839146   14008 round_trippers.go:580]     Audit-Id: 31a10e42-3e10-4d66-9f21-bff51f21e720
	I0429 13:10:49.840124   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"1622","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0429 13:10:49.840638   14008 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 13:10:49.840638   14008 pod_ready.go:81] duration metric: took 408.0728ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.840698   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:50.021678   14008 request.go:629] Waited for 180.7017ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:10:50.021712   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:10:50.021712   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.021712   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.021712   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.026413   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:50.026413   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.026413   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.026413   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Audit-Id: ebde3580-e7a5-4806-ac10-44c83996ef61
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.026413   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"1888","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0429 13:10:50.223857   14008 request.go:629] Waited for 196.2954ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.223926   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.223926   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.224012   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.224012   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.230703   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:50.230703   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.230703   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.230703   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Audit-Id: 62b1a773-0f5a-40e2-bf88-8b85bd45512d
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.232527   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:50.233182   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-scheduler-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:50.233280   14008 pod_ready.go:81] duration metric: took 392.5789ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:50.233375   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-scheduler-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:50.233375   14008 pod_ready.go:38] duration metric: took 1.6029421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
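Note: the "Waited for 196.2954ms due to client-side throttling, not priority and fairness" entry above is emitted by client-go's token-bucket rate limiter on the client side, not by the API server. A minimal sketch of where that limiter lives, assuming a plain clientset build; the QPS/Burst numbers below are illustrative, not minikube's actual settings:

// Illustrative sketch (not minikube's code): client-go delays requests
// that exceed Burst until the bucket refills at QPS per second, logging
// each delay as "client-side throttling, not priority and fairness".
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, fmt.Errorf("load kubeconfig: %w", err)
	}
	// Assumed example values: allow bursts of 10 requests, refill at 5/s.
	cfg.QPS = 5
	cfg.Burst = 10
	return kubernetes.NewForConfig(cfg)
}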
	I0429 13:10:50.233436   14008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:10:50.259196   14008 command_runner.go:130] > -16
	I0429 13:10:50.259196   14008 ops.go:34] apiserver oom_adj: -16
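The -16 read above is the kube-apiserver's oom_adj value; negative values make the kernel's OOM killer less likely to target the process. An illustrative reconstruction of the check (the shell command matches the log line; the Go wrapper around it is an assumption):

// Sketch only: resolve the apiserver PID with pgrep and read its
// /proc/<pid>/oom_adj file, parsing the single integer it contains.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func apiserverOOMAdj() (int, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		return 0, fmt.Errorf("read oom_adj: %w", err)
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}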
	I0429 13:10:50.259196   14008 kubeadm.go:591] duration metric: took 15.1579109s to restartPrimaryControlPlane
	I0429 13:10:50.259196   14008 kubeadm.go:393] duration metric: took 15.2334532s to StartCluster
	I0429 13:10:50.259196   14008 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:50.259196   14008 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:50.261185   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
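The lock.go entry above guards the kubeconfig rewrite with a named lock carrying a 500ms retry Delay and a 1m0s Timeout. A minimal sketch of that pattern, assuming a simple exclusive lockfile rather than minikube's actual lock implementation:

// Sketch: serialize writes behind a lockfile, retrying every `delay`
// until `timeout`, mirroring the {Delay:500ms Timeout:1m0s} fields logged.
package main

import (
	"errors"
	"os"
	"time"
)

func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: whoever creates the lockfile
		// holds the lock until it is removed.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return os.WriteFile(path, data, 0o644)
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay)
	}
}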
	I0429 13:10:50.262954   14008 start.go:234] Will wait 6m0s for node &{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 13:10:50.262954   14008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:10:50.263231   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:10:50.270419   14008 out.go:177] * Verifying Kubernetes components...
	I0429 13:10:50.274484   14008 out.go:177] * Enabled addons: 
	I0429 13:10:50.276376   14008 addons.go:505] duration metric: took 13.4222ms for enable addons: enabled=[]
	I0429 13:10:50.286439   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:50.641173   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
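The ssh_runner entries above execute commands inside the Hyper-V guest over SSH. A bare-bones sketch, assuming key-based auth is already wired into the ssh.ClientConfig; minikube's actual runner adds retries, timing, and log capture:

// Sketch: dial the VM's sshd, open a session, run a single command such
// as "sudo systemctl start kubelet", and return its combined output.
package main

import (
	"golang.org/x/crypto/ssh"
)

func runRemote(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}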
	I0429 13:10:50.673274   14008 node_ready.go:35] waiting up to 6m0s for node "multinode-409200" to be "Ready" ...
	I0429 13:10:50.673439   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.673439   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.673439   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.673439   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.678103   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:50.678103   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.678103   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.678103   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.678103   14008 round_trippers.go:580]     Audit-Id: 699344d7-dc50-443f-8bb8-c3f244cdd007
	I0429 13:10:50.678953   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.678953   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.678953   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.679065   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:51.187040   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:51.187040   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:51.187040   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:51.187040   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:51.193994   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:51.193994   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Audit-Id: b3ae6189-8206-4fc9-b2a8-0715179385e7
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:51.193994   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:51.193994   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:51 GMT
	I0429 13:10:51.194837   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:51.686449   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:51.686449   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:51.686449   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:51.686449   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:51.689860   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:51.690684   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:51.690684   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:51.690684   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:51 GMT
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Audit-Id: 65887080-e0b5-4f49-b7c2-d4b66d35bdd2
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:51.690851   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.185891   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:52.186016   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:52.186016   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:52.186016   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:52.190578   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:52.190578   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:52.190578   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:52.191399   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:52.191399   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:52 GMT
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Audit-Id: b7cd4ee3-6e22-4c54-afc7-9c1f7ac94664
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:52.191894   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.689207   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:52.689207   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:52.689301   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:52.689301   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:52.693631   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:52.694114   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Audit-Id: 62071d05-220a-4ad3-9ab3-86d77884b456
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:52.694114   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:52.694114   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:52 GMT
	I0429 13:10:52.694316   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.695025   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:53.173921   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:53.174073   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:53.174073   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:53.174073   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:53.178651   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:53.178832   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:53.178832   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:53.178832   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:53 GMT
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Audit-Id: d17eb130-dc4d-4ee8-9c1a-dc515c698603
	I0429 13:10:53.180046   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:53.684582   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:53.684582   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:53.684582   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:53.684582   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:53.687882   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:53.687882   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Audit-Id: 8b348bde-6af0-403f-98c6-8d85b64cd648
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:53.687882   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:53.687882   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:53 GMT
	I0429 13:10:53.689155   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:54.185299   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:54.185299   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:54.185299   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:54.185299   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:54.188998   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:54.188998   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Audit-Id: c7d09bd5-5fcf-467e-bfb9-8679a52f5c5d
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:54.188998   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:54.188998   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:54 GMT
	I0429 13:10:54.190048   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:54.673891   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:54.673891   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:54.673891   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:54.673891   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:54.678054   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:54.678054   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Audit-Id: 38ce8517-e351-4205-b5d2-66c694638301
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:54.678054   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:54.678054   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:54 GMT
	I0429 13:10:54.678566   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:55.174735   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:55.174735   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:55.175005   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:55.175005   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:55.178802   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:55.178802   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Audit-Id: 0d29a364-3088-4e71-a6bc-35ba4f50f0b3
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:55.178802   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:55.179283   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:55.179283   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:55 GMT
	I0429 13:10:55.179336   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:55.180509   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:55.686822   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:55.686822   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:55.686936   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:55.686936   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:55.690872   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:55.690872   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:55.690872   14008 round_trippers.go:580]     Audit-Id: de561c4f-b2f7-4c84-8436-86c5d9aad6a6
	I0429 13:10:55.691876   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:55.691900   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:55.691900   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:55.691900   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:55.691900   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:55 GMT
	I0429 13:10:55.692067   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:56.186057   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:56.186057   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:56.186057   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:56.186057   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:56.194215   14008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 13:10:56.194215   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:56.194215   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:56 GMT
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Audit-Id: 0b05603a-fd3b-47aa-b9dd-d6d9ba401b0e
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:56.194215   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:56.194215   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:56.685941   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:56.686159   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:56.686159   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:56.686159   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:56.689787   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:56.689787   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:56.689787   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:56.689787   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:56.690176   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:56.690176   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:56.690176   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:56 GMT
	I0429 13:10:56.690176   14008 round_trippers.go:580]     Audit-Id: 1608da55-4b88-4ddc-a1ed-6f303537ac49
	I0429 13:10:56.690734   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:57.175684   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:57.175684   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:57.175684   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:57.175684   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:57.179875   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:57.179875   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:57.179875   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:57 GMT
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Audit-Id: 5fd5d2d3-c9cd-42b0-9006-6bf7d8e9c720
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:57.179875   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:57.180133   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:57.180645   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:57.681468   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:57.681647   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:57.681714   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:57.681714   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:57.685186   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:57.685360   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:57.685360   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:57.685360   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:57 GMT
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Audit-Id: c881d06d-0a1c-46c0-8003-3a221c9d55b7
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:57.685557   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:58.174238   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:58.174238   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:58.174238   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:58.174628   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:58.186500   14008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 13:10:58.187071   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:58.187071   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:58 GMT
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Audit-Id: 99912078-5db9-4af2-9574-35a6183f2914
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:58.187071   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:58.187498   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:58.675893   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:58.675893   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:58.675893   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:58.675893   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:58.679487   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:58.679487   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:58.679487   14008 round_trippers.go:580]     Audit-Id: 3e485fdf-bf18-4cf9-8b68-f7d20ec2614e
	I0429 13:10:58.680468   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:58.680468   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:58.680491   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:58.680491   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:58.680491   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:58 GMT
	I0429 13:10:58.680696   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:59.180708   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:59.180708   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:59.180708   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:59.180708   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:59.185281   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:59.185281   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:59.185281   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:59 GMT
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Audit-Id: 7d71d40f-2b4a-4064-bf22-b1cfd6f58661
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:59.185741   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:59.185741   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:59.185741   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:59.185741   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:59.679696   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:59.679696   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:59.679696   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:59.679696   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:59.684814   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:10:59.684814   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Audit-Id: ecbde693-8c03-4c50-b63f-09e81ec97c94
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:59.684814   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:59.684814   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:59 GMT
	I0429 13:10:59.685414   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:00.183408   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:00.183408   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:00.183408   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:00.183408   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:00.187970   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:00.188320   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Audit-Id: 52537434-1493-4cc8-a7c1-e21ffa705563
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:00.188320   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:00.188320   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:00 GMT
	I0429 13:11:00.188320   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:00.685922   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:00.685922   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:00.685922   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:00.685922   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:00.692353   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:11:00.692419   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:00.692419   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:00.692419   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:00 GMT
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Audit-Id: 65f84017-4148-4249-982e-e140f7c5963a
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:00.693988   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:01.184436   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:01.184436   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:01.184436   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:01.184436   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:01.189096   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:01.189529   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:01 GMT
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Audit-Id: 9854f80d-d87b-43dc-86d3-0fd83aaa798d
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:01.189529   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:01.189529   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:01.189805   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:01.190096   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:11:01.685925   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:01.686069   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:01.686069   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:01.686069   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:01.690690   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:01.690690   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Audit-Id: 393e4a1b-e82d-44a1-88dc-e2fc729a0692
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:01.690690   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:01.690690   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:01 GMT
	I0429 13:11:01.691329   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:02.176381   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:02.176440   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:02.176440   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:02.176440   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:02.179776   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:02.180110   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:02.180110   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:02.180110   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:02.180177   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:02.180177   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:02.180177   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:02 GMT
	I0429 13:11:02.180177   14008 round_trippers.go:580]     Audit-Id: 8fcd6c0e-879b-4656-ae86-534dbbad60cd
	I0429 13:11:02.180545   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:02.685257   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:02.685317   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:02.685377   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:02.685377   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:02.688791   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:02.688791   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:02.689379   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:02.689379   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:02 GMT
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Audit-Id: 70acdfda-063c-41f2-a921-71007eba8c2f
	I0429 13:11:02.689576   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.178489   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:03.178489   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:03.178489   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:03.178582   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:03.181100   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:03.181100   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:03.181100   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:03 GMT
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Audit-Id: e98aa19b-a8e4-4846-ae40-8199e4df5111
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:03.182214   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:03.182631   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.678747   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:03.678968   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:03.678968   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:03.678968   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:03.681823   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:03.682818   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:03.682865   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:03.682865   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:03 GMT
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Audit-Id: fc371a51-6008-4662-b5b1-fe3f0b7151d1
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:03.683189   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.684302   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:11:04.187980   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:04.187980   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:04.187980   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:04.187980   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:04.192558   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:04.192558   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:04.192558   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:04.193375   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:04.193375   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:04 GMT
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Audit-Id: 42accb44-3a3e-4a4d-b6de-7a2d1b947189
	I0429 13:11:04.193449   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:04.687973   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:04.687973   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:04.687973   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:04.687973   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:04.691528   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:04.691528   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Audit-Id: abd7d680-cd3e-4f06-9d00-991f9c31fedc
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:04.691528   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:04.691528   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:04 GMT
	I0429 13:11:04.692327   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:05.173907   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.173986   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.173986   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.173986   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.181062   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:05.181134   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.181134   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.181134   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.181134   14008 round_trippers.go:580]     Audit-Id: 560eaec6-f5b1-4687-b8b7-9b642ee3a93d
	I0429 13:11:05.181202   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.181202   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.181202   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.181449   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:05.675709   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.675709   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.675709   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.675709   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.680276   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.680663   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Audit-Id: e72edc66-d778-4ce8-8de1-9b91fe2614b1
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.680663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.680663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.680872   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.681252   14008 node_ready.go:49] node "multinode-409200" has status "Ready":"True"
	I0429 13:11:05.681252   14008 node_ready.go:38] duration metric: took 15.0077603s for node "multinode-409200" to be "Ready" ...
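The loop recorded above polls GET /api/v1/nodes/multinode-409200 roughly every 500ms until the Node's Ready condition flips to True (15.0s in this run). For readers tracing the pattern, here is a minimal client-go sketch of that kind of readiness poll; it is an illustration under assumed names, not minikube's actual node_ready.go:

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady mirrors the polling pattern in the log above: GET the Node
// on an interval and return once its Ready condition reports True.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil // log equivalent: node "..." has status "Ready":"True"
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // the log shows this wait capped at 6m0s
		case <-tick.C:
		}
	}
}
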
	I0429 13:11:05.681252   14008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:11:05.681252   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:05.681252   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.681252   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.681252   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.691235   14008 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 13:11:05.691235   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.691235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.691235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Audit-Id: 2b065644-9acf-46ae-913a-2603d5ced794
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.692613   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1978"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:05.697408   14008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.697547   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 13:11:05.697605   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.697605   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.697605   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.701372   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.701372   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Audit-Id: 2e269bf8-1e53-4328-8bca-1e372edfddb3
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.701372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.701372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.701372   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0429 13:11:05.702142   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.702217   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.702217   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.702217   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.704972   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.704972   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.704972   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.705629   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.705629   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Audit-Id: 6d470ec0-c21d-4cde-9493-159632c5149e
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.705970   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.706845   14008 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.706908   14008 pod_ready.go:81] duration metric: took 9.3693ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.706908   14008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.706967   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 13:11:05.706967   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.707056   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.707056   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.710092   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.710092   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Audit-Id: 084586b1-d566-4a6c-9105-c97f15185847
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.710236   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.710236   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.710471   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1952","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0429 13:11:05.710554   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.710554   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.710554   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.710554   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.715220   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.715220   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.715220   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.715220   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Audit-Id: 644262d3-c088-426b-9f3a-615b950790dd
	I0429 13:11:05.715907   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.715907   14008 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.715907   14008 pod_ready.go:81] duration metric: took 8.9992ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.715907   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.715907   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 13:11:05.715907   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.715907   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.715907   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.720114   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.720188   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Audit-Id: 1b5dd591-0a57-4c1d-bd36-71812be7721f
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.720188   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.720188   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.720485   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"6b6a5200-5ddb-4315-be16-b0d86d36820f","resourceVersion":"1954","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.179.21:8443","kubernetes.io/config.hash":"67a711354a194289dea1aee475e07833","kubernetes.io/config.mirror":"67a711354a194289dea1aee475e07833","kubernetes.io/config.seen":"2024-04-29T13:10:38.602845937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0429 13:11:05.721347   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.721347   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.721347   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.721347   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.724243   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.724243   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.724243   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Audit-Id: e68c256a-c95b-46e8-8a7e-0503156e865b
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.725182   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.725182   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.726030   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.726728   14008 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.726728   14008 pod_ready.go:81] duration metric: took 10.8211ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.726728   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.726728   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 13:11:05.726728   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.726728   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.726728   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.730308   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.730308   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Audit-Id: 6dee5fdc-0be7-437c-9b4c-ee1c4d738f18
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.730308   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.730308   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.731100   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"1935","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0429 13:11:05.731664   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.731731   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.731731   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.731731   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.734372   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.734372   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Audit-Id: fe3df153-5467-464c-a3b5-3b8365511d0d
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.734372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.734372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.735222   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.735672   14008 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.735793   14008 pod_ready.go:81] duration metric: took 9.0643ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.735793   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.879512   14008 request.go:629] Waited for 143.7185ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:11:05.879671   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:11:05.879671   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.879671   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.879671   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.883626   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.884628   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.885018   14008 round_trippers.go:580]     Audit-Id: c1e48354-ef13-4b35-97a5-dbf41ae2d8b3
	I0429 13:11:05.885135   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.885135   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.885135   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.885135   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.885230   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.886475   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbxqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4f811c-336b-4038-b6ff-d62efffacd9b","resourceVersion":"1429","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
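The "Waited for ... due to client-side throttling, not priority and fairness" messages that follow come from client-go's local token-bucket rate limiter, which queues requests in the client before the server's API Priority and Fairness ever sees them; the stock defaults are QPS=5 and Burst=10, so the burst of per-pod GETs here gets spaced out by ~150-200ms. A sketch of how a client could raise those limits, assuming kubeconfigPath names a valid kubeconfig (illustrative only, not minikube's configuration):

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit.
// client-go defaults are QPS=5, Burst=10; GETs beyond that are delayed
// locally, which is exactly what the throttling log lines report.
func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
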
	I0429 13:11:06.084506   14008 request.go:629] Waited for 196.9071ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:11:06.084681   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:11:06.084681   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.084681   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.084681   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.090381   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:06.090651   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.090651   14008 round_trippers.go:580]     Audit-Id: 4fab6aff-c696-41a8-9796-c573426356ad
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.090717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.090717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.090996   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m03","uid":"d4d7c143-2c53-4eb2-9323-5c1ee0d251ea","resourceVersion":"1943","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_52_38_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4315 chars]
	I0429 13:11:06.091527   14008 pod_ready.go:97] node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:11:06.091651   14008 pod_ready.go:81] duration metric: took 355.8558ms for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	E0429 13:11:06.091651   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:11:06.091651   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.290939   14008 request.go:629] Waited for 199.0777ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:11:06.291211   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:11:06.291274   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.291274   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.291274   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.298143   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:11:06.298143   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.298143   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.298143   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Audit-Id: 747a95d3-c072-463e-9db2-88d7e12ed5ca
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.298920   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"1916","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0429 13:11:06.491212   14008 request.go:629] Waited for 192.0395ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:06.491669   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:06.491669   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.491731   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.491731   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.494958   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:06.494958   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Audit-Id: ef6f83d6-1d6b-4e54-9cb4-d33a6f354d5c
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.494958   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.494958   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.498383   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:06.498977   14008 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:06.499080   14008 pod_ready.go:81] duration metric: took 407.4257ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.499080   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.678596   14008 request.go:629] Waited for 179.3821ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:11:06.678847   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:11:06.678847   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.678847   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.678847   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.684227   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:06.684227   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Audit-Id: 0adc8eb6-9903-4d4f-9b24-7879b44914a1
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.684227   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.684663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.684795   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 13:11:06.882781   14008 request.go:629] Waited for 197.6295ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:11:06.883033   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:11:06.883033   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.883033   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.883033   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.886819   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:06.886819   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.886819   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Audit-Id: c6b740c7-31f1-429b-ba96-c3e365079573
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.886819   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.887828   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"1622","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0429 13:11:06.887828   14008 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:06.887828   14008 pod_ready.go:81] duration metric: took 388.7453ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.887828   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:07.086917   14008 request.go:629] Waited for 198.8928ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:11:07.087113   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:11:07.087113   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.087113   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.087304   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.093037   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:07.093037   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Audit-Id: 1e2b60d1-6818-4c6b-b33c-5d9514c5c89d
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.093037   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.093037   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.093331   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"1934","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0429 13:11:07.290286   14008 request.go:629] Waited for 196.5112ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:07.290286   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:07.290286   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.290286   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.290286   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.294919   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:07.294919   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Audit-Id: 2c7a4793-9967-4e0f-aae9-77addb3ebd01
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.294919   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.294919   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.295888   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:07.296497   14008 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:07.296608   14008 pod_ready.go:81] duration metric: took 408.7766ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:07.296628   14008 pod_ready.go:38] duration metric: took 1.6153642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:11:07.296665   14008 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:11:07.312809   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:11:07.345695   14008 command_runner.go:130] > 1888
	I0429 13:11:07.346302   14008 api_server.go:72] duration metric: took 17.0832182s to wait for apiserver process to appear ...
	I0429 13:11:07.346302   14008 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:11:07.346302   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:11:07.356740   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 200:
	ok
	I0429 13:11:07.357463   14008 round_trippers.go:463] GET https://172.26.179.21:8443/version
	I0429 13:11:07.357463   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.357463   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.357463   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.360052   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:07.360052   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.360052   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Content-Length: 263
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Audit-Id: 3e362bae-65b1-4699-9423-6123b744af12
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.360192   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.360192   14008 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 13:11:07.360352   14008 api_server.go:141] control plane version: v1.30.0
	I0429 13:11:07.360446   14008 api_server.go:131] duration metric: took 14.1446ms to wait for apiserver health ...
	I0429 13:11:07.360502   14008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:11:07.478195   14008 request.go:629] Waited for 117.4453ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.478195   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.478480   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.478480   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.478480   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.486841   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.486841   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.486841   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.486841   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Audit-Id: f6f584cb-eaf3-47d2-8ff1-a01a43753afd
	I0429 13:11:07.488927   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:07.493205   14008 system_pods.go:59] 12 kube-system pods found
	I0429 13:11:07.493205   14008 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 13:11:07.493794   14008 system_pods.go:74] duration metric: took 132.7027ms to wait for pod list to return data ...
	I0429 13:11:07.493794   14008 default_sa.go:34] waiting for default service account to be created ...
	I0429 13:11:07.679506   14008 request.go:629] Waited for 185.4727ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/default/serviceaccounts
	I0429 13:11:07.679506   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/default/serviceaccounts
	I0429 13:11:07.679506   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.679506   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.679506   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.687301   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.687301   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.687301   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.687301   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Content-Length: 262
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Audit-Id: 62d1e99d-c2f3-4609-b041-9dff5a486a55
	I0429 13:11:07.687301   14008 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c200474-8705-40aa-8512-ec20a74a9ff0","resourceVersion":"323","creationTimestamp":"2024-04-29T12:44:46Z"}}]}
	I0429 13:11:07.687301   14008 default_sa.go:45] found service account: "default"
	I0429 13:11:07.687301   14008 default_sa.go:55] duration metric: took 193.5049ms for default service account to be created ...
	I0429 13:11:07.687301   14008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 13:11:07.885946   14008 request.go:629] Waited for 198.471ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.886152   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.886152   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.886152   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.886152   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.893359   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.893475   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.893475   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Audit-Id: a1dc6265-af27-4d09-964f-6572b9695aa1
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.893475   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.895074   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:07.899419   14008 system_pods.go:86] 12 kube-system pods found
	I0429 13:11:07.899419   14008 system_pods.go:89] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 13:11:07.899419   14008 system_pods.go:126] duration metric: took 212.1167ms to wait for k8s-apps to be running ...
	I0429 13:11:07.899419   14008 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 13:11:07.911383   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:11:07.952016   14008 system_svc.go:56] duration metric: took 52.5962ms WaitForService to wait for kubelet
	I0429 13:11:07.952070   14008 kubeadm.go:576] duration metric: took 17.6889824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:11:07.952070   14008 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:11:08.089687   14008 request.go:629] Waited for 137.4332ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes
	I0429 13:11:08.089825   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes
	I0429 13:11:08.089825   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:08.089825   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:08.089884   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:08.099580   14008 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 13:11:08.099737   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:08.099737   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:08.099737   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:08 GMT
	I0429 13:11:08.099737   14008 round_trippers.go:580]     Audit-Id: 94fa5ce5-7333-489f-9172-24e1fe7734b6
	I0429 13:11:08.099809   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:08.099835   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:08.099835   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:08.100486   14008 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1983"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1982","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15418 chars]
	I0429 13:11:08.101709   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:105] duration metric: took 149.7119ms to run NodePressure ...
	I0429 13:11:08.101783   14008 start.go:240] waiting for startup goroutines ...
	I0429 13:11:08.101783   14008 start.go:245] waiting for cluster config update ...
	I0429 13:11:08.101783   14008 start.go:254] writing updated cluster config ...
	I0429 13:11:08.106053   14008 out.go:177] 
	I0429 13:11:08.110091   14008 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:11:08.114231   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:11:08.114231   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:11:08.123452   14008 out.go:177] * Starting "multinode-409200-m02" worker node in "multinode-409200" cluster
	I0429 13:11:08.126895   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:11:08.127240   14008 cache.go:56] Caching tarball of preloaded images
	I0429 13:11:08.128014   14008 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 13:11:08.128142   14008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 13:11:08.128552   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:11:08.131167   14008 start.go:360] acquireMachinesLock for multinode-409200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:11:08.131167   14008 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-409200-m02"
	I0429 13:11:08.131167   14008 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:11:08.131167   14008 fix.go:54] fixHost starting: m02
	I0429 13:11:08.131832   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:11:10.293329   14008 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:11:10.293877   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:10.293877   14008 fix.go:112] recreateIfNeeded on multinode-409200-m02: state=Stopped err=<nil>
	W0429 13:11:10.293877   14008 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:11:10.300732   14008 out.go:177] * Restarting existing hyperv VM for "multinode-409200-m02" ...
	I0429 13:11:10.303796   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m02
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:13.467416   14008 main.go:141] libmachine: Waiting for host to start...
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:11:15.764453   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:11:15.764453   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:15.764646   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:11:18.398789   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:11:18.398789   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:19.400754   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:11:21.613999   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:11:21.614090   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:21.614090   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
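Note on the failure mode: the stderr capture above ends mid-loop. Hyper-V reports the restarted multinode-409200-m02 VM as Running, but the follow-up PowerShell query for the adapter's first IP address keeps coming back empty, so minikube re-polls roughly once a second until the test's overall deadline expires. A minimal Go sketch of that polling pattern (a hypothetical vmIP helper with an illustrative timeout, not minikube's actual libmachine code):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // vmIP is a hypothetical helper mirroring the loop visible in the log:
    // query Hyper-V for the VM's first IP address and retry until it is
    // non-empty or the context deadline passes.
    func vmIP(ctx context.Context, name string) (string, error) {
    	// Same PowerShell expression the log shows being executed.
    	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
    	for {
    		out, err := exec.CommandContext(ctx, "powershell.exe",
    			"-NoProfile", "-NonInteractive", query).Output()
    		if err == nil {
    			if ip := strings.TrimSpace(string(out)); ip != "" {
    				return ip, nil // DHCP has handed the guest an address
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return "", ctx.Err() // what eventually surfaces as the test timeout
    		case <-time.After(1 * time.Second):
    		}
    	}
    }

    func main() {
    	// Illustrative timeout; the real test budget is much larger.
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
    	defer cancel()
    	fmt.Println(vmIP(ctx, "multinode-409200-m02"))
    }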
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-409200" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-409200
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-409200: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-409200" : context deadline exceeded
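For context, "context deadline exceeded (0s)" means the verification command never actually ran: the test's shared context had already expired during the long restart, so the exec layer failed immediately and zero elapsed time was recorded. A minimal sketch of that behavior (illustrative binary path; recent Go releases return the context error without launching the process):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Deadline already in the past, as it was for the test after the restart.
    	ctx, cancel := context.WithTimeout(context.Background(), 0)
    	defer cancel()

    	start := time.Now()
    	err := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
    		"node", "list", "-p", "multinode-409200").Run()

    	// Prints something like: err=context deadline exceeded elapsed=0s
    	fmt.Printf("err=%v elapsed=%s\n", err, time.Since(start).Round(time.Second))
    }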
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-409200	172.26.185.116
multinode-409200-m02	172.26.183.208
multinode-409200-m03	172.26.181.104

                                                
                                                
After restart: 
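The assertion behind this message compares the node list captured before the stop/start cycle with whatever "node list" printed afterwards; since the post-restart command never produced output, the "after" side is empty and the two can never match. A rough sketch of that comparison (hypothetical strings, not the test's actual helper):

    package main

    import "fmt"

    func main() {
    	before := "multinode-409200\t172.26.185.116\n" +
    		"multinode-409200-m02\t172.26.183.208\n" +
    		"multinode-409200-m03\t172.26.181.104\n"
    	after := "" // "node list" never ran after the restart, so nothing was captured

    	if before != after {
    		fmt.Println("reported node list is not the same after restart")
    	}
    }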
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200
E0429 13:11:27.490951    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-409200 -n multinode-409200: (12.307813s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 logs -n 25: (8.8972577s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-409200 cp testdata\cp-test.txt                                                                                 | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:56 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:56 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200:/home/docker/cp-test_multinode-409200-m02_multinode-409200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200 sudo cat                                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-409200-m02_multinode-409200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m03:/home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | multinode-409200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200-m03 sudo cat                                                                    | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:57 UTC |
	|         | /home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp testdata\cp-test.txt                                                                                 | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:57 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:58 UTC |
	|         | multinode-409200:/home/docker/cp-test_multinode-409200-m03_multinode-409200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:58 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200 sudo cat                                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | /home/docker/cp-test_multinode-409200-m03_multinode-409200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt                                                        | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m02:/home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n                                                                                                  | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | multinode-409200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-409200 ssh -n multinode-409200-m02 sudo cat                                                                    | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 12:59 UTC |
	|         | /home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-409200 node stop m03                                                                                           | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 12:59 UTC | 29 Apr 24 13:00 UTC |
	| node    | multinode-409200 node start                                                                                              | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 13:01 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-409200                                                                                                 | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 13:05 UTC |                     |
	| stop    | -p multinode-409200                                                                                                      | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 13:05 UTC | 29 Apr 24 13:08 UTC |
	| start   | -p multinode-409200                                                                                                      | multinode-409200 | minikube6\jenkins | v1.33.0 | 29 Apr 24 13:08 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 13:08:25
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 13:08:25.761784   14008 out.go:291] Setting OutFile to fd 1560 ...
	I0429 13:08:25.761784   14008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:25.761784   14008 out.go:304] Setting ErrFile to fd 1592...
	I0429 13:08:25.761784   14008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:25.791908   14008 out.go:298] Setting JSON to false
	I0429 13:08:25.796369   14008 start.go:129] hostinfo: {"hostname":"minikube6","uptime":37578,"bootTime":1714358527,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 13:08:25.796369   14008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 13:08:25.925859   14008 out.go:177] * [multinode-409200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 13:08:26.027798   14008 notify.go:220] Checking for updates...
	I0429 13:08:26.129954   14008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:08:26.271046   14008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:08:26.413859   14008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 13:08:26.635759   14008 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 13:08:26.770158   14008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:08:26.820621   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:08:26.820621   14008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:08:32.684693   14008 out.go:177] * Using the hyperv driver based on existing profile
	I0429 13:08:32.784878   14008 start.go:297] selected driver: hyperv
	I0429 13:08:32.784878   14008 start.go:901] validating driver "hyperv" against &{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:08:32.784878   14008 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:08:32.852889   14008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:08:32.853124   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:08:32.853124   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:08:32.853392   14008 start.go:340] cluster config:
	{Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.185.116 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:08:32.853753   14008 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 13:08:32.877592   14008 out.go:177] * Starting "multinode-409200" primary control-plane node in "multinode-409200" cluster
	I0429 13:08:32.959876   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:08:32.960617   14008 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 13:08:32.960699   14008 cache.go:56] Caching tarball of preloaded images
	I0429 13:08:32.960986   14008 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 13:08:32.961281   14008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 13:08:32.961705   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:08:32.965509   14008 start.go:360] acquireMachinesLock for multinode-409200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:08:32.965834   14008 start.go:364] duration metric: took 93.9µs to acquireMachinesLock for "multinode-409200"
	I0429 13:08:32.965963   14008 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:08:32.966056   14008 fix.go:54] fixHost starting: 
	I0429 13:08:32.966295   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:35.788000   14008 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:08:35.788000   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:35.788000   14008 fix.go:112] recreateIfNeeded on multinode-409200: state=Stopped err=<nil>
	W0429 13:08:35.788000   14008 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:08:35.795978   14008 out.go:177] * Restarting existing hyperv VM for "multinode-409200" ...
	I0429 13:08:35.798552   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200
	I0429 13:08:39.042010   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:39.042010   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:39.042194   14008 main.go:141] libmachine: Waiting for host to start...
	I0429 13:08:39.042251   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:41.382182   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:44.011346   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:44.011346   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:45.015181   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:47.322916   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:47.322916   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:47.323178   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:50.059132   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:50.059132   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:51.069218   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:53.360106   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:53.361130   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:53.361130   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:08:56.064919   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:08:56.065338   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:57.071277   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:08:59.340750   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:01.956175   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:09:01.956175   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:02.957308   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:05.219018   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:05.219018   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:05.219585   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:07.896792   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:07.896792   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:07.900478   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:10.111442   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:12.823053   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:12.823449   14008 main.go:141] libmachine: [stderr =====>] : 
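
The exchange above is minikube's host-start wait: libmachine shells out to powershell.exe, first for the VM state and then for the first NIC's first address, looping with a short sleep until Hyper-V reports a non-empty IP (at 13:09:07 it finally gets 172.26.179.21). A minimal Go sketch of that loop, assuming only that powershell.exe is on PATH; the retry interval and timeout below are illustrative, not minikube's actual values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // psQuery runs a single PowerShell expression the way libmachine does:
    // powershell.exe -NoProfile -NonInteractive <expr>, returning trimmed stdout.
    func psQuery(expr string) (string, error) {
    	out, err := exec.Command("powershell.exe",
    		"-NoProfile", "-NonInteractive", expr).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM state and its first NIC address until the guest
    // reports a non-empty IP or the deadline passes.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
    		if err != nil {
    			return "", err
    		}
    		if state == "Running" {
    			ip, _ := psQuery(fmt.Sprintf(
    				"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    			if ip != "" {
    				return ip, nil
    			}
    		}
    		time.Sleep(time.Second) // illustrative back-off; note the ~1s gaps in the log
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("multinode-409200", 5*time.Minute)
    	fmt.Println(ip, err)
    }
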
	I0429 13:09:12.823724   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:09:12.826581   14008 machine.go:94] provisionDockerMachine start ...
	I0429 13:09:12.826826   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:15.095780   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:15.095780   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:15.096632   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:17.746773   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:17.747601   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:17.753789   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:17.754442   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:17.754442   14008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 13:09:17.905063   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 13:09:17.905063   14008 buildroot.go:166] provisioning hostname "multinode-409200"
	I0429 13:09:17.905596   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:20.135330   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:20.135330   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:20.135930   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:22.816213   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:22.816213   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:22.823408   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:22.823601   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:22.823601   14008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-409200 && echo "multinode-409200" | sudo tee /etc/hostname
	I0429 13:09:23.011604   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-409200
	
	I0429 13:09:23.011604   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:25.191924   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:25.193006   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:25.193122   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:27.891715   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:27.891715   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:27.897717   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:27.898303   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:27.898470   14008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-409200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-409200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-409200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 13:09:28.063541   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
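
Each "About to run SSH command" pair above is a one-shot SSH session against the guest on port 22 (hostname, /etc/hostname, then the /etc/hosts guard script). A sketch of the same pattern with golang.org/x/crypto/ssh; the user, key path, and host-key policy here are assumptions for illustration, not minikube's exact setup:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH opens one session, runs one command, and returns its output,
    // mirroring the log's command/err/output triples.
    func runSSH(addr string, signer ssh.Signer, cmd string) (string, error) {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; assumption
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	key, err := os.ReadFile(`C:\path\to\machines\multinode-409200\id_rsa`) // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	out, err := runSSH("172.26.179.21:22", signer,
    		`sudo hostname multinode-409200 && echo "multinode-409200" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
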
	I0429 13:09:28.063541   14008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0429 13:09:28.063541   14008 buildroot.go:174] setting up certificates
	I0429 13:09:28.063541   14008 provision.go:84] configureAuth start
	I0429 13:09:28.064075   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:30.271497   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:30.271497   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:30.272145   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:32.931304   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:32.931559   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:32.931661   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:35.138250   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:35.138954   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:35.138954   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:37.771701   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:37.772390   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:37.772390   14008 provision.go:143] copyHostCerts
	I0429 13:09:37.772619   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0429 13:09:37.772914   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0429 13:09:37.772992   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0429 13:09:37.773470   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0429 13:09:37.774674   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0429 13:09:37.774940   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0429 13:09:37.774940   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0429 13:09:37.775350   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0429 13:09:37.776466   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0429 13:09:37.776813   14008 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0429 13:09:37.776813   14008 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0429 13:09:37.776813   14008 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0429 13:09:37.777791   14008 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-409200 san=[127.0.0.1 172.26.179.21 localhost minikube multinode-409200]
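
provision.go:117 issues a CA-signed server certificate whose SANs cover 127.0.0.1, the VM IP, and the hostnames listed in the log line. A self-contained sketch of that signing step with crypto/x509; the in-memory CA, key size, serials, and validity below are illustrative (minikube loads its CA from ca.pem/ca-key.pem rather than generating one):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical CA, generated in-memory only for this sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the SANs seen in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-409200"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.26.179.21")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-409200"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
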
	I0429 13:09:37.999208   14008 provision.go:177] copyRemoteCerts
	I0429 13:09:38.014017   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 13:09:38.014017   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:40.288292   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:40.288292   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:40.289423   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:42.972024   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:42.972783   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:42.973426   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:09:43.106746   14008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0925688s)
	I0429 13:09:43.106746   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0429 13:09:43.107222   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 13:09:43.160595   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0429 13:09:43.161126   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 13:09:43.223841   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0429 13:09:43.223841   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 13:09:43.280305   14008 provision.go:87] duration metric: took 15.2160875s to configureAuth
	I0429 13:09:43.280404   14008 buildroot.go:189] setting minikube options for container-runtime
	I0429 13:09:43.281129   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:09:43.281129   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:45.483597   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:45.484214   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:45.484214   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:48.174925   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:48.174925   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:48.182082   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:48.182082   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:48.182082   14008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 13:09:48.326663   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 13:09:48.326663   14008 buildroot.go:70] root file system type: tmpfs
	I0429 13:09:48.326927   14008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 13:09:48.326927   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:50.516696   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:50.517223   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:50.517317   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:53.203921   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:53.204493   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:53.212208   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:53.212835   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:53.212835   14008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 13:09:53.379334   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 13:09:53.379334   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:55.511866   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:09:58.139772   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:09:58.139772   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:09:58.146666   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:09:58.147237   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:09:58.147314   14008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 13:10:00.796240   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 13:10:00.796240   14008 machine.go:97] duration metric: took 47.9692944s to provisionDockerMachine
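
The diff-then-move sequence that closes provisionDockerMachine is an idempotent unit update: the unit is staged as docker.service.new, and only when diff exits non-zero (files differ, or the live unit is missing, as the "can't stat" output shows) is it moved into place and the daemon reloaded, enabled, and restarted. A sketch of that flow with a stubbed command runner standing in for the SSH session; the shell quoting is simplified relative to the real command:

    package main

    import "fmt"

    // run is a stand-in for the SSH runner in the log; here it just echoes.
    func run(cmd string) error {
    	fmt.Println("ssh:", cmd)
    	return nil
    }

    func updateDockerUnit(unit string) error {
    	// Stage the new unit next to the live one (quoting simplified for the sketch).
    	if err := run(fmt.Sprintf(
    		"sudo mkdir -p /lib/systemd/system && printf %%s %q | sudo tee /lib/systemd/system/docker.service.new",
    		unit)); err != nil {
    		return err
    	}
    	// diff exits non-zero when the files differ or the target is missing,
    	// which is exactly what triggers the swap and restart in the log.
    	return run("sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || " +
    		"{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
    		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }")
    }

    func main() {
    	_ = updateDockerUnit("[Unit]\nDescription=Docker Application Container Engine\n")
    }
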
	I0429 13:10:00.796351   14008 start.go:293] postStartSetup for "multinode-409200" (driver="hyperv")
	I0429 13:10:00.796351   14008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 13:10:00.810733   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 13:10:00.811698   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:02.973540   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:02.973540   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:02.974257   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:05.664930   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:05.664930   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:05.666232   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:05.784286   14008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9725501s)
	I0429 13:10:05.801660   14008 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 13:10:05.810254   14008 command_runner.go:130] > NAME=Buildroot
	I0429 13:10:05.810254   14008 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 13:10:05.810254   14008 command_runner.go:130] > ID=buildroot
	I0429 13:10:05.810254   14008 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 13:10:05.810254   14008 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 13:10:05.810254   14008 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 13:10:05.810537   14008 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0429 13:10:05.811074   14008 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0429 13:10:05.813448   14008 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> 84962.pem in /etc/ssl/certs
	I0429 13:10:05.813448   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /etc/ssl/certs/84962.pem
	I0429 13:10:05.829733   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 13:10:05.853670   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /etc/ssl/certs/84962.pem (1708 bytes)
	I0429 13:10:05.912063   14008 start.go:296] duration metric: took 5.1156729s for postStartSetup
	I0429 13:10:05.912196   14008 fix.go:56] duration metric: took 1m32.9455259s for fixHost
	I0429 13:10:05.912312   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:08.096551   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:10.747445   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:10.747445   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:10.757920   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:10:10.757920   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:10:10.757920   14008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 13:10:10.912573   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714396210.915243025
	
	I0429 13:10:10.912573   14008 fix.go:216] guest clock: 1714396210.915243025
	I0429 13:10:10.912573   14008 fix.go:229] Guest: 2024-04-29 13:10:10.915243025 +0000 UTC Remote: 2024-04-29 13:10:05.912239 +0000 UTC m=+100.367905601 (delta=5.003004025s)
	I0429 13:10:10.912797   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:13.084036   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:13.084036   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:13.084853   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:15.775768   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:15.776165   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:15.782500   14008 main.go:141] libmachine: Using SSH client type: native
	I0429 13:10:15.782641   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb6a1c0] 0xb6cda0 <nil>  [] 0s} 172.26.179.21 22 <nil> <nil>}
	I0429 13:10:15.782641   14008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714396210
	I0429 13:10:15.945111   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr 29 13:10:10 UTC 2024
	
	I0429 13:10:15.945111   14008 fix.go:236] clock set: Mon Apr 29 13:10:10 UTC 2024
	 (err=<nil>)
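
fix.go compares the guest's `date +%s.%N` output against the host clock and, when the delta is too large (5.003s here), writes a corrected epoch back with `sudo date -s @<seconds>`. A hedged sketch of that decision; the 2s tolerance and the choice of which epoch to write back are assumptions, not minikube's actual policy:

    package main

    import (
    	"fmt"
    	"time"
    )

    // needsClockFix reports whether guest and host clocks diverge beyond the
    // tolerance, and if so returns the command that would rewrite the guest clock.
    func needsClockFix(guest, host time.Time, tolerance time.Duration) (bool, string) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		return false, ""
    	}
    	// Which epoch to write back is an assumption in this sketch.
    	return true, fmt.Sprintf("sudo date -s @%d", host.Unix())
    }

    func main() {
    	guest := time.Unix(1714396210, 915243025)        // parsed from "1714396210.915243025"
    	host := guest.Add(-5003004025 * time.Nanosecond) // the 5.003004025s delta in the log
    	fix, cmd := needsClockFix(guest, host, 2*time.Second)
    	fmt.Println(fix, cmd)
    }
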
	I0429 13:10:15.945111   14008 start.go:83] releasing machines lock for "multinode-409200", held for 1m42.9784947s
	I0429 13:10:15.945111   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:18.153880   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:18.154498   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:18.154498   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:20.781121   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:20.781121   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:20.787293   14008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 13:10:20.787402   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:20.797785   14008 ssh_runner.go:195] Run: cat /version.json
	I0429 13:10:20.797785   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:23.030229   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:23.041925   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:10:25.805087   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:25.805087   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:25.805636   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:25.834229   14008 main.go:141] libmachine: [stdout =====>] : 172.26.179.21
	
	I0429 13:10:25.834513   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:10:25.834698   14008 sshutil.go:53] new ssh client: &{IP:172.26.179.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:10:25.912883   14008 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 13:10:25.912883   14008 ssh_runner.go:235] Completed: cat /version.json: (5.1150588s)
	I0429 13:10:25.926120   14008 ssh_runner.go:195] Run: systemctl --version
	I0429 13:10:26.038845   14008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 13:10:26.038936   14008 command_runner.go:130] > systemd 252 (252)
	I0429 13:10:26.039017   14008 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 13:10:26.039111   14008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2517537s)
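
The two commands raced above are a reachability probe against registry.k8s.io and a read of the guest's /version.json, which pins the ISO, kicbase, and minikube versions to an exact commit. A small sketch of parsing that JSON; the struct below simply mirrors the fields shown in the output and is not minikube's own type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors the fields of the guest's /version.json shown above.
    type versionInfo struct {
    	ISOVersion      string `json:"iso_version"`
    	KicbaseVersion  string `json:"kicbase_version"`
    	MinikubeVersion string `json:"minikube_version"`
    	Commit          string `json:"commit"`
    }

    func main() {
    	raw := `{"iso_version": "v1.33.0-1713736271-18706", "minikube_version": "v1.33.0"}`
    	var v versionInfo
    	if err := json.Unmarshal([]byte(raw), &v); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(v.ISOVersion, v.MinikubeVersion)
    }
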
	I0429 13:10:26.052643   14008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 13:10:26.061649   14008 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 13:10:26.062700   14008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 13:10:26.078783   14008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 13:10:26.111328   14008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0429 13:10:26.111328   14008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
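
The find/-exec pipeline above neutralizes competing bridge and podman CNI configs by renaming them to *.mk_disabled rather than deleting them, so they can be restored later. The same idea as a local Go sketch (minikube does this remotely via find(1); the filename matching here is simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames bridge/podman configs to *.mk_disabled, as the
    // find/-exec mv pipeline in the log does, and returns what it moved.
    func disableBridgeCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var moved []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return moved, err
    			}
    			moved = append(moved, src)
    		}
    	}
    	return moved, nil
    }

    func main() {
    	moved, err := disableBridgeCNI("/etc/cni/net.d")
    	fmt.Println(moved, err)
    }
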
	I0429 13:10:26.111328   14008 start.go:494] detecting cgroup driver to use...
	I0429 13:10:26.111328   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:10:26.146411   14008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0429 13:10:26.162573   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 13:10:26.201194   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 13:10:26.225029   14008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 13:10:26.239018   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 13:10:26.273983   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:10:26.311530   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 13:10:26.350470   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 13:10:26.387122   14008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 13:10:26.421028   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 13:10:26.458166   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 13:10:26.493411   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 13:10:26.528887   14008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 13:10:26.549089   14008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 13:10:26.564803   14008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 13:10:26.600072   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:26.834923   14008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 13:10:26.881179   14008 start.go:494] detecting cgroup driver to use...
	I0429 13:10:26.896666   14008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 13:10:26.930303   14008 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0429 13:10:26.930303   14008 command_runner.go:130] > [Unit]
	I0429 13:10:26.930303   14008 command_runner.go:130] > Description=Docker Application Container Engine
	I0429 13:10:26.930303   14008 command_runner.go:130] > Documentation=https://docs.docker.com
	I0429 13:10:26.930303   14008 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0429 13:10:26.930303   14008 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0429 13:10:26.930303   14008 command_runner.go:130] > StartLimitBurst=3
	I0429 13:10:26.930303   14008 command_runner.go:130] > StartLimitIntervalSec=60
	I0429 13:10:26.930303   14008 command_runner.go:130] > [Service]
	I0429 13:10:26.930303   14008 command_runner.go:130] > Type=notify
	I0429 13:10:26.930303   14008 command_runner.go:130] > Restart=on-failure
	I0429 13:10:26.930303   14008 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0429 13:10:26.930303   14008 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0429 13:10:26.930303   14008 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0429 13:10:26.930303   14008 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0429 13:10:26.930303   14008 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0429 13:10:26.930303   14008 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0429 13:10:26.930303   14008 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0429 13:10:26.930931   14008 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0429 13:10:26.930931   14008 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0429 13:10:26.931014   14008 command_runner.go:130] > ExecStart=
	I0429 13:10:26.931069   14008 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0429 13:10:26.931167   14008 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0429 13:10:26.931254   14008 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0429 13:10:26.931254   14008 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitNOFILE=infinity
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitNPROC=infinity
	I0429 13:10:26.931324   14008 command_runner.go:130] > LimitCORE=infinity
	I0429 13:10:26.931390   14008 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0429 13:10:26.931390   14008 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0429 13:10:26.931451   14008 command_runner.go:130] > TasksMax=infinity
	I0429 13:10:26.931451   14008 command_runner.go:130] > TimeoutStartSec=0
	I0429 13:10:26.931451   14008 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0429 13:10:26.931518   14008 command_runner.go:130] > Delegate=yes
	I0429 13:10:26.931518   14008 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0429 13:10:26.931518   14008 command_runner.go:130] > KillMode=process
	I0429 13:10:26.931579   14008 command_runner.go:130] > [Install]
	I0429 13:10:26.931579   14008 command_runner.go:130] > WantedBy=multi-user.target
	I0429 13:10:26.948663   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:10:26.994506   14008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 13:10:27.046758   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 13:10:27.094029   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:10:27.139164   14008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 13:10:27.216717   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 13:10:27.247422   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 13:10:27.291788   14008 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0429 13:10:27.306669   14008 ssh_runner.go:195] Run: which cri-dockerd
	I0429 13:10:27.314373   14008 command_runner.go:130] > /usr/bin/cri-dockerd
	I0429 13:10:27.328161   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 13:10:27.349063   14008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 13:10:27.403749   14008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 13:10:27.634501   14008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 13:10:27.859745   14008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 13:10:27.860077   14008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 13:10:27.910023   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:28.139535   14008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 13:10:30.902242   14008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7626857s)
	I0429 13:10:30.916749   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 13:10:30.959921   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 13:10:30.997388   14008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 13:10:31.238328   14008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 13:10:31.467162   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:31.697642   14008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 13:10:31.743692   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 13:10:31.782725   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:32.005975   14008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 13:10:32.140481   14008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 13:10:32.154151   14008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 13:10:32.164090   14008 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0429 13:10:32.164090   14008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 13:10:32.164090   14008 command_runner.go:130] > Device: 0,22	Inode: 844         Links: 1
	I0429 13:10:32.164090   14008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0429 13:10:32.164090   14008 command_runner.go:130] > Access: 2024-04-29 13:10:32.042556180 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] > Modify: 2024-04-29 13:10:32.042556180 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] > Change: 2024-04-29 13:10:32.047556176 +0000
	I0429 13:10:32.164090   14008 command_runner.go:130] >  Birth: -
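
start.go:541's 60s wait is a stat poll on the cri-dockerd socket path; the stat output above shows the socket appeared almost immediately after the service restart. A sketch of such a wait: minikube runs stat(1) over SSH, so the local os.Stat below is a stand-in, and the poll interval is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls a path until stat succeeds or the deadline passes,
    // mirroring "Will wait 60s for socket path /var/run/cri-dockerd.sock".
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is illustrative
    	}
    	return fmt.Errorf("%s did not appear within %v", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
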
	I0429 13:10:32.164090   14008 start.go:562] Will wait 60s for crictl version
	I0429 13:10:32.176665   14008 ssh_runner.go:195] Run: which crictl
	I0429 13:10:32.182667   14008 command_runner.go:130] > /usr/bin/crictl
	I0429 13:10:32.197313   14008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 13:10:32.261405   14008 command_runner.go:130] > Version:  0.1.0
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeName:  docker
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0429 13:10:32.261405   14008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 13:10:32.261405   14008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 13:10:32.270999   14008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 13:10:32.307247   14008 command_runner.go:130] > 26.0.2
	I0429 13:10:32.319421   14008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 13:10:32.356661   14008 command_runner.go:130] > 26.0.2
	I0429 13:10:32.362190   14008 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 13:10:32.362602   14008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0429 13:10:32.367164   14008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0429 13:10:32.367742   14008 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:da:8e:53 Flags:up|broadcast|multicast|running}
	I0429 13:10:32.371536   14008 ip.go:210] interface addr: fe80::e4d4:6d70:21fb:68f3/64
	I0429 13:10:32.371565   14008 ip.go:210] interface addr: 172.26.176.1/20
	I0429 13:10:32.383828   14008 ssh_runner.go:195] Run: grep 172.26.176.1	host.minikube.internal$ /etc/hosts
	I0429 13:10:32.392642   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
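
The bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing line ending in a tab plus that name, appends the current gateway IP, and copies the result back over /etc/hosts. The same idea as a local Go sketch (the SSH transport and the tmp-file-then-cp step are elided):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any prior line for the given host and appends a
    // fresh "ip<TAB>host" mapping, like the grep/echo pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "172.26.176.1", "host.minikube.internal"))
    }
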
	I0429 13:10:32.419538   14008 kubeadm.go:877] updating cluster {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 13:10:32.419782   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:10:32.430909   14008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 13:10:32.458204   14008 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 13:10:32.458565   14008 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:10:32.458565   14008 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 13:10:32.458565   14008 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:10:32.458565   14008 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0429 13:10:32.458848   14008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0429 13:10:32.458848   14008 docker.go:615] Images already preloaded, skipping extraction
	I0429 13:10:32.472906   14008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0429 13:10:32.497667   14008 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0429 13:10:32.497751   14008 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0429 13:10:32.497751   14008 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0429 13:10:32.497751   14008 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0429 13:10:32.497751   14008 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 13:10:32.497751   14008 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0429 13:10:32.497879   14008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0429 13:10:32.498004   14008 cache_images.go:84] Images are preloaded, skipping loading
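
"Images are preloaded, skipping loading" is the outcome of comparing `docker images --format {{.Repository}}:{{.Tag}}` output against the expected preload set, which lets the tarball extraction be skipped. A minimal sketch of that check; the two tags in main are just a subset of the list above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // preloaded reports whether every expected tag already shows up in
    // `docker images --format {{.Repository}}:{{.Tag}}`.
    func preloaded(expected []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, tag := range strings.Fields(string(out)) {
    		have[tag] = true
    	}
    	for _, want := range expected {
    		if !have[want] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := preloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.30.0",
    		"registry.k8s.io/pause:3.9",
    	})
    	fmt.Println(ok, err)
    }
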
	I0429 13:10:32.498004   14008 kubeadm.go:928] updating node { 172.26.179.21 8443 v1.30.0 docker true true} ...
	I0429 13:10:32.498150   14008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-409200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.179.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 13:10:32.509222   14008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 13:10:32.543139   14008 command_runner.go:130] > cgroupfs
	I0429 13:10:32.543341   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:10:32.543341   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:10:32.543341   14008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 13:10:32.543341   14008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.179.21 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-409200 NodeName:multinode-409200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.179.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.179.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 13:10:32.543341   14008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.179.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-409200"
	  kubeletExtraArgs:
	    node-ip: 172.26.179.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 13:10:32.558249   14008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubeadm
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubectl
	I0429 13:10:32.578764   14008 command_runner.go:130] > kubelet
	I0429 13:10:32.578764   14008 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 13:10:32.593717   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 13:10:32.618298   14008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 13:10:32.654208   14008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 13:10:32.691118   14008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 13:10:32.749763   14008 ssh_runner.go:195] Run: grep 172.26.179.21	control-plane.minikube.internal$ /etc/hosts
	I0429 13:10:32.756903   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.179.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
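
Editor's note: the bash one-liner above makes the /etc/hosts entry idempotent: it strips any stale `control-plane.minikube.internal` line, then appends the current "IP<TAB>host" pair. A minimal Go sketch of the same remove-then-append technique (function name `pinHost` is hypothetical; this is not minikube's implementation, and writing /etc/hosts requires root):

package main

import (
	"os"
	"strings"
)

// pinHost mirrors the one-liner above: filter out any old line ending in
// "\t<host>" (the grep -v step), then append the fresh entry.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t<host>$'
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "172.26.179.21", "control-plane.minikube.internal")
}
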
	I0429 13:10:32.794045   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:33.010838   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:10:33.043306   14008 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200 for IP: 172.26.179.21
	I0429 13:10:33.043306   14008 certs.go:194] generating shared ca certs ...
	I0429 13:10:33.043404   14008 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.044260   14008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0429 13:10:33.044594   14008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0429 13:10:33.044777   14008 certs.go:256] generating profile certs ...
	I0429 13:10:33.045613   14008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\client.key
	I0429 13:10:33.045774   14008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918
	I0429 13:10:33.045835   14008 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.26.179.21]
	I0429 13:10:33.772814   14008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 ...
	I0429 13:10:33.772814   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918: {Name:mkc683afb0b6b1567608b8dec0da29a4359533c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.774811   14008 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918 ...
	I0429 13:10:33.774811   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918: {Name:mk75928da1c49eef78614e437525c498adb354d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:33.775207   14008 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt.2dc65918 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt
	I0429 13:10:33.790283   14008 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key.2dc65918 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key
	I0429 13:10:33.792365   14008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key
	I0429 13:10:33.792465   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 13:10:33.792674   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0429 13:10:33.793000   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 13:10:33.793362   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 13:10:33.793667   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 13:10:33.793971   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 13:10:33.794200   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 13:10:33.794452   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 13:10:33.795479   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem (1338 bytes)
	W0429 13:10:33.795963   14008 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496_empty.pem, impossibly tiny 0 bytes
	I0429 13:10:33.796141   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0429 13:10:33.796570   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0429 13:10:33.796858   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0429 13:10:33.797230   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0429 13:10:33.797621   14008 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem (1708 bytes)
	I0429 13:10:33.797956   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem -> /usr/share/ca-certificates/84962.pem
	I0429 13:10:33.798205   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:33.798428   14008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem -> /usr/share/ca-certificates/8496.pem
	I0429 13:10:33.799665   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 13:10:33.853915   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 13:10:33.907546   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 13:10:33.957137   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 13:10:34.012901   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 13:10:34.071279   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 13:10:34.134340   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 13:10:34.187889   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 13:10:34.243370   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\84962.pem --> /usr/share/ca-certificates/84962.pem (1708 bytes)
	I0429 13:10:34.314118   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 13:10:34.363407   14008 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8496.pem --> /usr/share/ca-certificates/8496.pem (1338 bytes)
	I0429 13:10:34.415530   14008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 13:10:34.472214   14008 ssh_runner.go:195] Run: openssl version
	I0429 13:10:34.481400   14008 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 13:10:34.495705   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84962.pem && ln -fs /usr/share/ca-certificates/84962.pem /etc/ssl/certs/84962.pem"
	I0429 13:10:34.532066   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.539647   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.539767   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 10:57 /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.552197   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84962.pem
	I0429 13:10:34.562004   14008 command_runner.go:130] > 3ec20f2e
	I0429 13:10:34.577762   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/84962.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 13:10:34.613665   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 13:10:34.646810   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.654603   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.654603   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 10:42 /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.669219   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 13:10:34.679324   14008 command_runner.go:130] > b5213941
	I0429 13:10:34.691573   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 13:10:34.729431   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8496.pem && ln -fs /usr/share/ca-certificates/8496.pem /etc/ssl/certs/8496.pem"
	I0429 13:10:34.773909   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.782813   14008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.782813   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 10:57 /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.798778   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8496.pem
	I0429 13:10:34.809208   14008 command_runner.go:130] > 51391683
	I0429 13:10:34.823093   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8496.pem /etc/ssl/certs/51391683.0"
	I0429 13:10:34.858902   14008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:10:34.866824   14008 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 13:10:34.866824   14008 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 13:10:34.866824   14008 command_runner.go:130] > Device: 8,1	Inode: 4196178     Links: 1
	I0429 13:10:34.866824   14008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:10:34.866936   14008 command_runner.go:130] > Access: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] > Modify: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] > Change: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.866984   14008 command_runner.go:130] >  Birth: 2024-04-29 12:44:20.371014084 +0000
	I0429 13:10:34.880826   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 13:10:34.890997   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.905146   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 13:10:34.918040   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.934613   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 13:10:34.946534   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.961633   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 13:10:34.971329   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:34.988042   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 13:10:34.997631   14008 command_runner.go:130] > Certificate will not expire
	I0429 13:10:35.012167   14008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 13:10:35.025130   14008 command_runner.go:130] > Certificate will not expire
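
Editor's note: each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate expires within 86400 seconds (24 hours). A minimal Go equivalent using crypto/x509 (a sketch under that assumption; helper name `expiresWithin` is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, the same question -checkend 86400 answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
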
	I0429 13:10:35.025696   14008 kubeadm.go:391] StartCluster: {Name:multinode-409200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-409200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.183.208 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.181.104 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 13:10:35.039648   14008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 13:10:35.079237   14008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0429 13:10:35.101169   14008 command_runner.go:130] > /var/lib/minikube/etcd:
	I0429 13:10:35.101169   14008 command_runner.go:130] > member
	W0429 13:10:35.101169   14008 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 13:10:35.101169   14008 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 13:10:35.101169   14008 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 13:10:35.114340   14008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 13:10:35.134421   14008 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:10:35.135942   14008 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-409200" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:35.136677   14008 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-409200" cluster setting kubeconfig missing "multinode-409200" context setting]
	I0429 13:10:35.137432   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:35.154685   14008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:35.155482   14008 kapi.go:59] client config for multinode-409200: &rest.Config{Host:"https://172.26.179.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-409200/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2015ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 13:10:35.156905   14008 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 13:10:35.169839   14008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 13:10:35.192119   14008 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0429 13:10:35.192119   14008 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0429 13:10:35.192119   14008 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0429 13:10:35.192119   14008 command_runner.go:130] >  kind: InitConfiguration
	I0429 13:10:35.192119   14008 command_runner.go:130] >  localAPIEndpoint:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -  advertiseAddress: 172.26.185.116
	I0429 13:10:35.192119   14008 command_runner.go:130] > +  advertiseAddress: 172.26.179.21
	I0429 13:10:35.192119   14008 command_runner.go:130] >    bindPort: 8443
	I0429 13:10:35.192119   14008 command_runner.go:130] >  bootstrapTokens:
	I0429 13:10:35.192119   14008 command_runner.go:130] >    - groups:
	I0429 13:10:35.192119   14008 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0429 13:10:35.192119   14008 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0429 13:10:35.192119   14008 command_runner.go:130] >    name: "multinode-409200"
	I0429 13:10:35.192119   14008 command_runner.go:130] >    kubeletExtraArgs:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -    node-ip: 172.26.185.116
	I0429 13:10:35.192119   14008 command_runner.go:130] > +    node-ip: 172.26.179.21
	I0429 13:10:35.192119   14008 command_runner.go:130] >    taints: []
	I0429 13:10:35.192119   14008 command_runner.go:130] >  ---
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0429 13:10:35.192119   14008 command_runner.go:130] >  kind: ClusterConfiguration
	I0429 13:10:35.192119   14008 command_runner.go:130] >  apiServer:
	I0429 13:10:35.192119   14008 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	I0429 13:10:35.192119   14008 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	I0429 13:10:35.192119   14008 command_runner.go:130] >    extraArgs:
	I0429 13:10:35.192119   14008 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0429 13:10:35.192119   14008 command_runner.go:130] >  controllerManager:
	I0429 13:10:35.192119   14008 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.26.185.116
	+  advertiseAddress: 172.26.179.21
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-409200"
	   kubeletExtraArgs:
	-    node-ip: 172.26.185.116
	+    node-ip: 172.26.179.21
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.26.185.116"]
	+  certSANs: ["127.0.0.1", "localhost", "172.26.179.21"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
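
Editor's note: the drift check above renders the desired config to kubeadm.yaml.new and shells out to `sudo diff -u` against the deployed kubeadm.yaml, keeping the unified diff for the log. A minimal Go sketch of just the yes/no part of that decision (it does not produce a diff; paths as in the log above):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	cur, errCur := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	want, errWant := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if errCur != nil || errWant != nil {
		fmt.Println("config missing; full init required")
		return
	}
	// Any byte-level difference counts as drift and triggers a reconfigure.
	if !bytes.Equal(cur, want) {
		fmt.Println("detected kubeadm config drift (will reconfigure cluster)")
	}
}
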
	I0429 13:10:35.192119   14008 kubeadm.go:1154] stopping kube-system containers ...
	I0429 13:10:35.203446   14008 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 13:10:35.238894   14008 command_runner.go:130] > 98ab9c7d6885
	I0429 13:10:35.238945   14008 command_runner.go:130] > 5a03c0724371
	I0429 13:10:35.238945   14008 command_runner.go:130] > ea71df709887
	I0429 13:10:35.238945   14008 command_runner.go:130] > ba73c7e4d62c
	I0429 13:10:35.239038   14008 command_runner.go:130] > caeb8f4bcea1
	I0429 13:10:35.239038   14008 command_runner.go:130] > 3ba8caba4bc5
	I0429 13:10:35.239038   14008 command_runner.go:130] > 3792c8bbb983
	I0429 13:10:35.239038   14008 command_runner.go:130] > 2d26cd85561d
	I0429 13:10:35.239038   14008 command_runner.go:130] > 315326a1ce10
	I0429 13:10:35.239038   14008 command_runner.go:130] > 390664a85913
	I0429 13:10:35.239038   14008 command_runner.go:130] > 5adb6a9084e4
	I0429 13:10:35.239038   14008 command_runner.go:130] > 030b6d42f50f
	I0429 13:10:35.239038   14008 command_runner.go:130] > 19fd9c3dddd4
	I0429 13:10:35.239038   14008 command_runner.go:130] > 85aab37150a1
	I0429 13:10:35.239133   14008 command_runner.go:130] > c88537851c01
	I0429 13:10:35.239133   14008 command_runner.go:130] > 5d39391ba43b
	I0429 13:10:35.239210   14008 docker.go:483] Stopping containers: [98ab9c7d6885 5a03c0724371 ea71df709887 ba73c7e4d62c caeb8f4bcea1 3ba8caba4bc5 3792c8bbb983 2d26cd85561d 315326a1ce10 390664a85913 5adb6a9084e4 030b6d42f50f 19fd9c3dddd4 85aab37150a1 c88537851c01 5d39391ba43b]
	I0429 13:10:35.250508   14008 ssh_runner.go:195] Run: docker stop 98ab9c7d6885 5a03c0724371 ea71df709887 ba73c7e4d62c caeb8f4bcea1 3ba8caba4bc5 3792c8bbb983 2d26cd85561d 315326a1ce10 390664a85913 5adb6a9084e4 030b6d42f50f 19fd9c3dddd4 85aab37150a1 c88537851c01 5d39391ba43b
	I0429 13:10:35.284989   14008 command_runner.go:130] > 98ab9c7d6885
	I0429 13:10:35.284989   14008 command_runner.go:130] > 5a03c0724371
	I0429 13:10:35.284989   14008 command_runner.go:130] > ea71df709887
	I0429 13:10:35.284989   14008 command_runner.go:130] > ba73c7e4d62c
	I0429 13:10:35.284989   14008 command_runner.go:130] > caeb8f4bcea1
	I0429 13:10:35.284989   14008 command_runner.go:130] > 3ba8caba4bc5
	I0429 13:10:35.284989   14008 command_runner.go:130] > 3792c8bbb983
	I0429 13:10:35.284989   14008 command_runner.go:130] > 2d26cd85561d
	I0429 13:10:35.284989   14008 command_runner.go:130] > 315326a1ce10
	I0429 13:10:35.285162   14008 command_runner.go:130] > 390664a85913
	I0429 13:10:35.285162   14008 command_runner.go:130] > 5adb6a9084e4
	I0429 13:10:35.285162   14008 command_runner.go:130] > 030b6d42f50f
	I0429 13:10:35.285162   14008 command_runner.go:130] > 19fd9c3dddd4
	I0429 13:10:35.285162   14008 command_runner.go:130] > 85aab37150a1
	I0429 13:10:35.285162   14008 command_runner.go:130] > c88537851c01
	I0429 13:10:35.285162   14008 command_runner.go:130] > 5d39391ba43b
	I0429 13:10:35.303987   14008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 13:10:35.352160   14008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 13:10:35.372667   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0429 13:10:35.372667   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0429 13:10:35.373122   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0429 13:10:35.373122   14008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:10:35.373170   14008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 13:10:35.373170   14008 kubeadm.go:156] found existing configuration files:
	
	I0429 13:10:35.388941   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 13:10:35.410202   14008 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:10:35.411187   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 13:10:35.425094   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 13:10:35.471107   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 13:10:35.491287   14008 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:10:35.491389   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 13:10:35.504136   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 13:10:35.545372   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 13:10:35.564523   14008 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:10:35.565036   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 13:10:35.579140   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 13:10:35.612019   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 13:10:35.630364   14008 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:10:35.631398   14008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 13:10:35.644560   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 13:10:35.687811   14008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 13:10:35.717674   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:36.024096   14008 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0429 13:10:36.024162   14008 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 13:10:36.024363   14008 command_runner.go:130] > [certs] Using the existing "sa" key
	I0429 13:10:36.024363   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 13:10:38.048832   14008 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 13:10:38.048832   14008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.0243867s)
	I0429 13:10:38.048832   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 13:10:38.406873   14008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0429 13:10:38.406873   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 13:10:38.518414   14008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 13:10:38.518414   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:38.669414   14008 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 13:10:38.669414   14008 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:10:38.681426   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:39.190142   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:39.684892   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.196352   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.695154   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:10:40.729155   14008 command_runner.go:130] > 1888
	I0429 13:10:40.729853   14008 api_server.go:72] duration metric: took 2.060423s to wait for apiserver process to appear ...
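
Editor's note: the wait above simply re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node until it reports a PID; pgrep exits non-zero while no process matches. A minimal Go sketch of that loop (poll interval and one-minute deadline are assumptions, not values from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		// Same pgrep invocation the log shows; exit 0 only on a match.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
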
	I0429 13:10:40.729853   14008 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:10:40.729960   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.636826   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 13:10:44.636826   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 13:10:44.637625   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.662165   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 13:10:44.662621   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 13:10:44.740344   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:44.756261   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:44.756261   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:45.235230   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:45.245177   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:45.245177   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:45.738839   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:45.774814   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:45.774814   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:46.232514   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:46.249468   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 13:10:46.249468   14008 api_server.go:103] status: https://172.26.179.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 13:10:46.741217   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:10:46.761123   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 200:
	ok
	I0429 13:10:46.761123   14008 round_trippers.go:463] GET https://172.26.179.21:8443/version
	I0429 13:10:46.761123   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:46.761123   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:46.761123   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:46.775263   14008 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 13:10:46.775894   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:46 GMT
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Audit-Id: b5207000-30d0-494b-a060-a21331af6886
	I0429 13:10:46.775894   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:46.775963   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:46.775963   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:46.775963   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:46.775963   14008 round_trippers.go:580]     Content-Length: 263
	I0429 13:10:46.775963   14008 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 13:10:46.776171   14008 api_server.go:141] control plane version: v1.30.0
	I0429 13:10:46.776230   14008 api_server.go:131] duration metric: took 6.0463314s to wait for apiserver health ...
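
The six-second "wait for apiserver health" above is a poll loop: /healthz is retried until it stops returning 500, and the repeated [-]poststarthook/rbac/bootstrap-roles failures are the expected transient state while kubeadm recreates the bootstrap RBAC roles after the restart. A minimal Go sketch of the polling pattern (hypothetical helper, not minikube's actual api_server.go, which also distinguishes connection errors from HTTP 500s):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    // pollHealthz retries GET /healthz until it returns 200 OK or ctx expires.
    func pollHealthz(ctx context.Context, client *http.Client, url string) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is just "ok"
    			}
    			// A 500 carries the per-check listing seen in the log, e.g.
    			// [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
    			log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	// InsecureSkipVerify only because this is a throwaway probe sketch.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	if err := pollHealthz(ctx, c, "https://172.26.179.21:8443/healthz"); err != nil {
    		log.Fatal(err)
    	}
    }
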
	I0429 13:10:46.776288   14008 cni.go:84] Creating CNI manager for ""
	I0429 13:10:46.776288   14008 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 13:10:46.778656   14008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 13:10:46.794985   14008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 13:10:46.809433   14008 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0429 13:10:46.809433   14008 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0429 13:10:46.809433   14008 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0429 13:10:46.809433   14008 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 13:10:46.809433   14008 command_runner.go:130] > Access: 2024-04-29 13:09:07.025164922 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] > Modify: 2024-04-22 03:58:11.000000000 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] > Change: 2024-04-29 13:08:56.914000000 +0000
	I0429 13:10:46.809433   14008 command_runner.go:130] >  Birth: -
	I0429 13:10:46.809433   14008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 13:10:46.809433   14008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 13:10:46.892706   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 13:10:48.121191   14008 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0429 13:10:48.121306   14008 command_runner.go:130] > daemonset.apps/kindnet configured
	I0429 13:10:48.121619   14008 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2289039s)
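
The CNI step is two moves: pick a CNI from the cluster topology, then scp the rendered manifest onto the node and kubectl-apply it with the kubelet's kubeconfig, exactly as the ssh_runner lines show. The choice logged here ("multinode detected (3 nodes found), recommending kindnet") reduces to a rule like the following sketch; the single-node fallback value is an assumption for illustration, not minikube's exact default:

    package main

    import "fmt"

    // chooseCNI is a hypothetical distillation of the decision in cni.go:
    // an explicit --cni flag wins; otherwise multinode clusters get kindnet,
    // which routes pod traffic between nodes.
    func chooseCNI(nodeCount int, userChoice string) string {
    	if userChoice != "" {
    		return userChoice
    	}
    	if nodeCount > 1 {
    		return "kindnet"
    	}
    	return "bridge" // assumption: a basic bridge plugin for single node
    }

    func main() {
    	fmt.Println(chooseCNI(3, "")) // kindnet, as in the log above
    }
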
	I0429 13:10:48.121619   14008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:10:48.121619   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:10:48.122165   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.122386   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.122518   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.130235   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.130235   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Audit-Id: 3f664a24-4c2f-49a1-b7a7-a32a9b6e3357
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.130235   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.130235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.130235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.131224   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.133924   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1913"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87720 chars]
	I0429 13:10:48.141224   14008 system_pods.go:59] 12 kube-system pods found
	I0429 13:10:48.141224   14008 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 13:10:48.141224   14008 system_pods.go:61] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:10:48.141224   14008 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 13:10:48.141224   14008 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 13:10:48.141224   14008 system_pods.go:74] duration metric: took 19.6047ms to wait for pod list to return data ...
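
Each "Running / Ready:ContainersNotReady (...)" line above is derived from the pod's phase plus any of its conditions that are not True; the condition's message names the unready containers. A sketch of that formatting using k8s.io/api types (the exact helper in system_pods.go may differ):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // podSummary renders a pod the way the log does: phase first, then any
    // Ready/ContainersReady condition that is not True, with its reason and
    // message appended.
    func podSummary(p *v1.Pod) string {
    	s := string(p.Status.Phase)
    	for _, c := range p.Status.Conditions {
    		if (c.Type == v1.PodReady || c.Type == v1.ContainersReady) && c.Status != v1.ConditionTrue {
    			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
    		}
    	}
    	return s
    }

    func main() {
    	p := &v1.Pod{Status: v1.PodStatus{
    		Phase: v1.PodRunning,
    		Conditions: []v1.PodCondition{{
    			Type: v1.PodReady, Status: v1.ConditionFalse,
    			Reason:  "ContainersNotReady",
    			Message: "containers with unready status: [coredns]",
    		}},
    	}}
    	fmt.Println(podSummary(p)) // Running / Ready:ContainersNotReady (...)
    }
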
	I0429 13:10:48.141224   14008 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:10:48.142240   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes
	I0429 13:10:48.142240   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.142240   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.142240   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.146250   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:48.146250   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.146250   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.146250   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Audit-Id: cf7a0522-5ad0-4e7c-8eaf-2a6830f85f4c
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.146250   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.146250   14008 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1913"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15642 chars]
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:10:48.146250   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:10:48.146250   14008 node_conditions.go:105] duration metric: took 5.0262ms to run NodePressure ...
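
The NodePressure check above lists the nodes once and reads each node's reported capacity from its status, which is why the same ephemeral-storage and cpu figures repeat three times (once per node). An equivalent standalone check with client-go might look like this; the kubeconfig path is the in-VM one from the log and error handling is trimmed to log.Fatal:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a map of resource name to quantity; these two keys
    		// are what node_conditions.go reports above.
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
    			n.Name,
    			n.Status.Capacity.StorageEphemeral().String(),
    			n.Status.Capacity.Cpu().String())
    	}
    }
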
	I0429 13:10:48.146250   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 13:10:48.620056   14008 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0429 13:10:48.620056   14008 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0429 13:10:48.620056   14008 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 13:10:48.620056   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0429 13:10:48.620056   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.620056   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.620056   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.627207   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.627266   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Audit-Id: ca83a831-000a-40ee-adc6-1d0ef2c54bde
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.627266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.627266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.627266   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.627806   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1918"},"items":[{"metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1894","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0429 13:10:48.630272   14008 kubeadm.go:733] kubelet initialised
	I0429 13:10:48.630331   14008 kubeadm.go:734] duration metric: took 10.2744ms waiting for restarted kubelet to initialise ...
	I0429 13:10:48.630420   14008 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:10:48.630578   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:10:48.630652   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.630652   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.630743   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.640861   14008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 13:10:48.640861   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.640861   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Audit-Id: bbe85283-9ebf-4d30-94f7-88f1348625f8
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.640861   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.640861   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.642443   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1918"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87127 chars]
	I0429 13:10:48.646685   14008 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.646685   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 13:10:48.646685   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.646685   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.646685   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.649858   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.650445   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Audit-Id: de488676-6aca-4d9b-80b6-85dc7fdd2116
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.650445   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.650445   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.650445   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.650550   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.650773   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1885","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0429 13:10:48.651542   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.652274   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.652352   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.652352   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.654717   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.654717   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.654717   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.654717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.655708   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Audit-Id: 7adc1573-ea41-44a5-844a-53b7d89ca888
	I0429 13:10:48.655708   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.656018   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.656277   14008 pod_ready.go:97] node "multinode-409200" hosting pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.656277   14008 pod_ready.go:81] duration metric: took 9.5928ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.656277   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
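
The WaitExtra error above is the short-circuit in pod_ready.go: when the hosting node's Ready condition is not True, waiting up to 4m0s for the pod is pointless, so the pod is marked skipped and the loop moves on to the next one. The node check reduces to a few lines (sketch with k8s.io/api types; minikube's helper also threads the error back as seen in the log):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // nodeIsReady reports whether a node's Ready condition is True. Pods on
    // a not-Ready node are skipped by the wait loop rather than waited on.
    func nodeIsReady(n *v1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == v1.NodeReady {
    			return c.Status == v1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	n := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
    		{Type: v1.NodeReady, Status: v1.ConditionFalse},
    	}}}
    	fmt.Println(nodeIsReady(n)) // false, as for multinode-409200 above
    }
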
	I0429 13:10:48.656277   14008 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.656277   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 13:10:48.656277   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.656277   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.656277   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.660017   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.660017   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Audit-Id: fe9d4865-509e-43c8-ae28-7f276d119e1e
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.660017   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.660017   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.660017   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.661477   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1894","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0429 13:10:48.662269   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.663360   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.663360   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.663360   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.667241   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.667241   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Audit-Id: 1aea0693-a68a-4329-9d76-ad1b5a3a2c21
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.667241   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.667241   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.667241   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.667241   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.667241   14008 pod_ready.go:97] node "multinode-409200" hosting pod "etcd-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.667241   14008 pod_ready.go:81] duration metric: took 10.9638ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.667241   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "etcd-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.667241   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.667241   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 13:10:48.667241   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.668255   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.668255   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.670266   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.670266   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.670266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.670266   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Audit-Id: ec6b3514-7584-4b2d-9e19-fe4062d24ff7
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.670266   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.671248   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"6b6a5200-5ddb-4315-be16-b0d86d36820f","resourceVersion":"1890","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.179.21:8443","kubernetes.io/config.hash":"67a711354a194289dea1aee475e07833","kubernetes.io/config.mirror":"67a711354a194289dea1aee475e07833","kubernetes.io/config.seen":"2024-04-29T13:10:38.602845937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0429 13:10:48.671248   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.671248   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.671248   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.671248   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.675310   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:48.675310   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.675310   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.675310   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Audit-Id: f260a138-3a6a-47a2-b11b-7d8b6ab61109
	I0429 13:10:48.675310   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.678251   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.678251   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-apiserver-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.678251   14008 pod_ready.go:81] duration metric: took 11.0094ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.678251   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-apiserver-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.678251   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.678251   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 13:10:48.678251   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.678251   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.679253   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.686258   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:48.686806   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Audit-Id: 84051d4b-0338-46e3-9ed4-8858dd2633f1
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.686806   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.686806   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.686806   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.686806   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"1880","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0429 13:10:48.687652   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:48.687715   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.687715   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.687715   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.690402   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:48.690979   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Audit-Id: 00d411dd-e57f-4e1c-a643-9de858e65797
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.691031   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.691031   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.691031   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.692635   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:48.693331   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-controller-manager-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.693331   14008 pod_ready.go:81] duration metric: took 15.0801ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:48.693395   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-controller-manager-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:48.693395   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:48.829030   14008 request.go:629] Waited for 135.6337ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:10:48.829349   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:10:48.829421   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:48.829421   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:48.829421   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:48.833267   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:48.833267   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:48 GMT
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Audit-Id: 5edf7f33-c1e1-4bc5-8593-ff7952c710ec
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:48.833585   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:48.833585   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:48.833585   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:48.833962   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbxqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4f811c-336b-4038-b6ff-d62efffacd9b","resourceVersion":"1429","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0429 13:10:49.020427   14008 request.go:629] Waited for 185.624ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:10:49.020669   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:10:49.020744   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.020744   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.020809   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.025435   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:49.025435   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.025435   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.025435   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Audit-Id: 00bb0595-0536-41ca-9e19-5898faa60fb5
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.025435   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.027444   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m03","uid":"d4d7c143-2c53-4eb2-9323-5c1ee0d251ea","resourceVersion":"1438","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_52_38_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4412 chars]
	I0429 13:10:49.027444   14008 pod_ready.go:97] node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:10:49.027444   14008 pod_ready.go:81] duration metric: took 334.0466ms for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:49.027988   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
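
The "Waited for ... due to client-side throttling" lines above and below come from client-go's local rate limiter, which by default allows 5 requests/second with a burst of 10 and delays anything beyond that; as the message itself notes, this is not apiserver priority-and-fairness. If the added latency mattered, the limits could be raised on the rest.Config before building the clientset, as in this sketch:

    package main

    import (
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Zero values mean client-go's defaults (QPS=5, Burst=10); bursts of
    	// back-to-back GETs beyond that are delayed locally, producing the
    	// "client-side throttling" log lines.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		log.Fatal(err)
    	}
    }
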
	I0429 13:10:49.027988   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.220324   14008 request.go:629] Waited for 192.2313ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:10:49.220542   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:10:49.220542   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.220542   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.220542   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.227711   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:10:49.227711   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Audit-Id: 8d483a1a-9ff5-4e63-9b3c-45793ff78cba
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.227711   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.227711   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.227711   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.227711   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"1916","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0429 13:10:49.426771   14008 request.go:629] Waited for 197.7004ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:49.426852   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:49.426852   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.426852   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.426852   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.431517   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:49.431584   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Audit-Id: 4d99da3d-fd99-4427-9605-0f4236d4fd28
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.431584   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.431584   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.431584   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.431871   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:49.432562   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-proxy-g2jp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:49.432562   14008 pod_ready.go:81] duration metric: took 404.5254ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:49.432562   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-proxy-g2jp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:49.432562   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.632744   14008 request.go:629] Waited for 200.18ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:10:49.633072   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:10:49.633072   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.633072   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.633072   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.639443   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:49.639905   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Audit-Id: 72de9fa0-738e-447d-a427-6e703b29e0ff
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.639905   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.639905   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.639905   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.640257   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 13:10:49.835337   14008 request.go:629] Waited for 194.372ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:10:49.835561   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:10:49.835561   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:49.835832   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:49.835832   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:49.838221   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:10:49.838221   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:49.838221   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:49.838221   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:49.838221   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:49.839146   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:49.839146   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:49 GMT
	I0429 13:10:49.839146   14008 round_trippers.go:580]     Audit-Id: 31a10e42-3e10-4d66-9f21-bff51f21e720
	I0429 13:10:49.840124   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"1622","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0429 13:10:49.840638   14008 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 13:10:49.840638   14008 pod_ready.go:81] duration metric: took 408.0728ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:10:49.840698   14008 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
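	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted when client-go's client-side rate limiter delays a request before it ever reaches the apiserver; historically rest.Config defaulted to a token bucket of 5 QPS with a burst of 10, a detail worth confirming against the client-go version in use. A minimal sketch of that behavior with golang.org/x/time/rate, the limiter client-go's flowcontrol package builds on:

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Assumed defaults: 5 requests/second with a burst of 10,
		// mirroring client-go's historical rest.Config defaults.
		limiter := rate.NewLimiter(rate.Limit(5), 10)

		for i := 0; i < 15; i++ {
			start := time.Now()
			// Wait blocks until the token bucket admits the request;
			// this is where the "Waited for ..." delay is spent.
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
			}
		}
	}

	The first ten requests burst straight through; after that each one queues for roughly 200ms, which matches the ~180-200ms waits logged here.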
	I0429 13:10:50.021678   14008 request.go:629] Waited for 180.7017ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:10:50.021712   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:10:50.021712   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.021712   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.021712   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.026413   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:50.026413   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.026413   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.026413   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Audit-Id: ebde3580-e7a5-4806-ac10-44c83996ef61
	I0429 13:10:50.026413   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.026413   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"1888","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0429 13:10:50.223857   14008 request.go:629] Waited for 196.2954ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.223926   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.223926   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.224012   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.224012   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.230703   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:50.230703   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.230703   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.230703   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Audit-Id: 62b1a773-0f5a-40e2-bf88-8b85bd45512d
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.230703   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.232527   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:50.233182   14008 pod_ready.go:97] node "multinode-409200" hosting pod "kube-scheduler-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:50.233280   14008 pod_ready.go:81] duration metric: took 392.5789ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	E0429 13:10:50.233375   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200" hosting pod "kube-scheduler-multinode-409200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200" has status "Ready":"False"
	I0429 13:10:50.233375   14008 pod_ready.go:38] duration metric: took 1.6029421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
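	[editor's note] Each pod_ready block above is one iteration of the same wait: fetch the pod, check its Ready condition, and record a duration metric once it flips, with the node check short-circuiting pods hosted on a NotReady node (the WaitExtra error above). A compilable sketch of such a loop with client-go; the helper name, 500ms interval, and error text are assumptions, not minikube's exact pod_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls one pod until its Ready condition is True or
	// the timeout expires. The real loop also skips pods whose host
	// node is not Ready, a refinement omitted here.
	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(context.Background(), client,
			"kube-system", "kube-scheduler-multinode-409200", 4*time.Minute))
	}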
	I0429 13:10:50.233436   14008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 13:10:50.259196   14008 command_runner.go:130] > -16
	I0429 13:10:50.259196   14008 ops.go:34] apiserver oom_adj: -16
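	[editor's note] The oom_adj probe verifies that the kube-apiserver process is shielded from the kernel OOM killer: /proc/<pid>/oom_adj ranges from -17 (never kill) to 15, so the -16 read back here means the apiserver is nearly exempt. The same probe from Go, shelling out exactly as the log shows:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the log runs over SSH: read the apiserver's
		// oom_adj via pgrep. Requires a running kube-apiserver.
		out, err := exec.Command("/bin/bash", "-c",
			"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
	}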
	I0429 13:10:50.259196   14008 kubeadm.go:591] duration metric: took 15.1579109s to restartPrimaryControlPlane
	I0429 13:10:50.259196   14008 kubeadm.go:393] duration metric: took 15.2334532s to StartCluster
	I0429 13:10:50.259196   14008 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 13:10:50.259196   14008 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 13:10:50.261185   14008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
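	[editor's note] Both locks above are acquired with a 500ms retry delay and a 1m0s timeout before the kubeconfig is rewritten, so concurrent minikube invocations queue on the file rather than clobber it. A generic sketch of that acquire-with-retry shape using an exclusive-create lockfile; minikube's real lock package differs, so treat this purely as an illustration of the {Delay Timeout} parameters:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries an exclusive-create lockfile every delay
	// until timeout, mirroring {Delay:500ms Timeout:1m0s} in the log.
	// Illustrative only; not minikube's actual lock implementation.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("kubeconfig.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to rewrite kubeconfig")
	}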
	I0429 13:10:50.262954   14008 start.go:234] Will wait 6m0s for node &{Name: IP:172.26.179.21 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 13:10:50.262954   14008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 13:10:50.270419   14008 out.go:177] * Verifying Kubernetes components...
	I0429 13:10:50.263231   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:10:50.274484   14008 out.go:177] * Enabled addons: 
	I0429 13:10:50.276376   14008 addons.go:505] duration metric: took 13.4222ms for enable addons: enabled=[]
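	[editor's note] Because every entry in the toEnable map above is false, the enable step has nothing to do, which is why the addons phase completes in about 13ms with enabled=[]. A trivial sketch of deriving that list from such a map; the map below is a trimmed stand-in for the full one in the log:

	package main

	import (
		"fmt"
		"sort"
	)

	func main() {
		toEnable := map[string]bool{
			"dashboard": false, "ingress": false, "metrics-server": false,
			"registry": false, "storage-provisioner": false,
		}
		var enabled []string
		for name, on := range toEnable {
			if on {
				enabled = append(enabled, name)
			}
		}
		sort.Strings(enabled)
		fmt.Printf("enabled=%v\n", enabled) // prints: enabled=[]
	}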
	I0429 13:10:50.286439   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 13:10:50.641173   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 13:10:50.673274   14008 node_ready.go:35] waiting up to 6m0s for node "multinode-409200" to be "Ready" ...
	I0429 13:10:50.673439   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:50.673439   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:50.673439   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:50.673439   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:50.678103   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:50.678103   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:50.678103   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:50.678103   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:50 GMT
	I0429 13:10:50.678103   14008 round_trippers.go:580]     Audit-Id: 699344d7-dc50-443f-8bb8-c3f244cdd007
	I0429 13:10:50.678953   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:50.678953   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:50.678953   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:50.679065   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:51.187040   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:51.187040   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:51.187040   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:51.187040   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:51.193994   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:10:51.193994   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Audit-Id: b3ae6189-8206-4fc9-b2a8-0715179385e7
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:51.193994   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:51.193994   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:51.193994   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:51 GMT
	I0429 13:10:51.194837   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:51.686449   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:51.686449   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:51.686449   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:51.686449   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:51.689860   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:51.690684   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:51.690684   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:51.690684   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:51 GMT
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Audit-Id: 65887080-e0b5-4f49-b7c2-d4b66d35bdd2
	I0429 13:10:51.690684   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:51.690851   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.185891   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:52.186016   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:52.186016   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:52.186016   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:52.190578   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:52.190578   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:52.190578   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:52.191399   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:52.191399   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:52 GMT
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Audit-Id: b7cd4ee3-6e22-4c54-afc7-9c1f7ac94664
	I0429 13:10:52.191399   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:52.191894   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.689207   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:52.689207   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:52.689301   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:52.689301   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:52.693631   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:52.694114   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Audit-Id: 62071d05-220a-4ad3-9ab3-86d77884b456
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:52.694114   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:52.694114   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:52.694114   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:52 GMT
	I0429 13:10:52.694316   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:52.695025   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
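	[editor's note] From here the log settles into a steady poll: roughly every 500ms the Node object is re-fetched and its conditions checked, with node_ready.go:53 reporting "Ready":"False" until the kubelet comes up or the 6m0s budget from start.go runs out (the resourceVersion bump from 1837 to 1946 further down is the node status actually changing). This is the node analogue of the pod-readiness sketch earlier; it reuses the same client scaffolding, and the names are again illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady fetches the Node every 500ms and returns once its
	// NodeReady condition is True; the 6m budget matches "waiting up
	// to 6m0s" in the log. Build the clientset as in the pod sketch.
	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("node %q not Ready within %v", name, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}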
	I0429 13:10:53.173921   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:53.174073   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:53.174073   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:53.174073   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:53.178651   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:53.178832   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:53.178832   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:53.178832   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:53 GMT
	I0429 13:10:53.178832   14008 round_trippers.go:580]     Audit-Id: d17eb130-dc4d-4ee8-9c1a-dc515c698603
	I0429 13:10:53.180046   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:53.684582   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:53.684582   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:53.684582   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:53.684582   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:53.687882   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:53.687882   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Audit-Id: 8b348bde-6af0-403f-98c6-8d85b64cd648
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:53.687882   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:53.687882   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:53.687882   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:53 GMT
	I0429 13:10:53.689155   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:54.185299   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:54.185299   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:54.185299   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:54.185299   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:54.188998   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:54.188998   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Audit-Id: c7d09bd5-5fcf-467e-bfb9-8679a52f5c5d
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:54.188998   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:54.188998   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:54.188998   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:54 GMT
	I0429 13:10:54.190048   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:54.673891   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:54.673891   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:54.673891   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:54.673891   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:54.678054   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:54.678054   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Audit-Id: 38ce8517-e351-4205-b5d2-66c694638301
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:54.678054   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:54.678054   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:54.678054   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:54 GMT
	I0429 13:10:54.678566   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:55.174735   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:55.174735   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:55.175005   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:55.175005   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:55.178802   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:55.178802   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Audit-Id: 0d29a364-3088-4e71-a6bc-35ba4f50f0b3
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:55.178802   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:55.178802   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:55.179283   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:55.179283   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:55 GMT
	I0429 13:10:55.179336   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:55.180509   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:55.686822   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:55.686822   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:55.686936   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:55.686936   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:55.690872   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:55.690872   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:55.690872   14008 round_trippers.go:580]     Audit-Id: de561c4f-b2f7-4c84-8436-86c5d9aad6a6
	I0429 13:10:55.691876   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:55.691900   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:55.691900   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:55.691900   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:55.691900   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:55 GMT
	I0429 13:10:55.692067   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:56.186057   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:56.186057   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:56.186057   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:56.186057   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:56.194215   14008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 13:10:56.194215   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:56.194215   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:56 GMT
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Audit-Id: 0b05603a-fd3b-47aa-b9dd-d6d9ba401b0e
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:56.194215   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:56.194215   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:56.194215   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:56.685941   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:56.686159   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:56.686159   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:56.686159   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:56.689787   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:56.689787   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:56.689787   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:56.689787   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:56.690176   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:56.690176   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:56.690176   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:56 GMT
	I0429 13:10:56.690176   14008 round_trippers.go:580]     Audit-Id: 1608da55-4b88-4ddc-a1ed-6f303537ac49
	I0429 13:10:56.690734   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:57.175684   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:57.175684   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:57.175684   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:57.175684   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:57.179875   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:57.179875   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:57.179875   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:57 GMT
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Audit-Id: 5fd5d2d3-c9cd-42b0-9006-6bf7d8e9c720
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:57.179875   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:57.179875   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:57.180133   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:57.180645   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:57.681468   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:57.681647   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:57.681714   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:57.681714   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:57.685186   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:57.685360   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:57.685360   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:57.685360   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:57 GMT
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Audit-Id: c881d06d-0a1c-46c0-8003-3a221c9d55b7
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:57.685360   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:57.685557   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1837","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0429 13:10:58.174238   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:58.174238   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:58.174238   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:58.174628   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:58.186500   14008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 13:10:58.187071   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:58.187071   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:58 GMT
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Audit-Id: 99912078-5db9-4af2-9574-35a6183f2914
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:58.187071   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:58.187071   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:58.187498   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:58.675893   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:58.675893   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:58.675893   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:58.675893   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:58.679487   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:10:58.679487   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:58.679487   14008 round_trippers.go:580]     Audit-Id: 3e485fdf-bf18-4cf9-8b68-f7d20ec2614e
	I0429 13:10:58.680468   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:58.680468   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:58.680491   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:58.680491   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:58.680491   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:58 GMT
	I0429 13:10:58.680696   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:59.180708   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:59.180708   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:59.180708   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:59.180708   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:59.185281   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:10:59.185281   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:59.185281   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:59 GMT
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Audit-Id: 7d71d40f-2b4a-4064-bf22-b1cfd6f58661
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:59.185741   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:59.185741   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:59.185741   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:59.185741   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:10:59.185741   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:10:59.679696   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:10:59.679696   14008 round_trippers.go:469] Request Headers:
	I0429 13:10:59.679696   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:10:59.679696   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:10:59.684814   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:10:59.684814   14008 round_trippers.go:577] Response Headers:
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Audit-Id: ecbde693-8c03-4c50-b63f-09e81ec97c94
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:10:59.684814   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:10:59.684814   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:10:59.684814   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:10:59 GMT
	I0429 13:10:59.685414   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:00.183408   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:00.183408   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:00.183408   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:00.183408   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:00.187970   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:00.188320   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Audit-Id: 52537434-1493-4cc8-a7c1-e21ffa705563
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:00.188320   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:00.188320   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:00.188320   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:00 GMT
	I0429 13:11:00.188320   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:00.685922   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:00.685922   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:00.685922   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:00.685922   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:00.692353   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:11:00.692419   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:00.692419   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:00.692419   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:00 GMT
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Audit-Id: 65f84017-4148-4249-982e-e140f7c5963a
	I0429 13:11:00.692419   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:00.693988   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:01.184436   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:01.184436   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:01.184436   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:01.184436   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:01.189096   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:01.189529   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:01 GMT
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Audit-Id: 9854f80d-d87b-43dc-86d3-0fd83aaa798d
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:01.189529   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:01.189529   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:01.189529   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:01.189805   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:01.190096   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:11:01.685925   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:01.686069   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:01.686069   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:01.686069   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:01.690690   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:01.690690   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Audit-Id: 393e4a1b-e82d-44a1-88dc-e2fc729a0692
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:01.690690   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:01.690690   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:01.690690   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:01 GMT
	I0429 13:11:01.691329   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:02.176381   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:02.176440   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:02.176440   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:02.176440   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:02.179776   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:02.180110   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:02.180110   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:02.180110   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:02.180177   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:02.180177   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:02.180177   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:02 GMT
	I0429 13:11:02.180177   14008 round_trippers.go:580]     Audit-Id: 8fcd6c0e-879b-4656-ae86-534dbbad60cd
	I0429 13:11:02.180545   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:02.685257   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:02.685317   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:02.685377   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:02.685377   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:02.688791   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:02.688791   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:02.689379   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:02.689379   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:02 GMT
	I0429 13:11:02.689379   14008 round_trippers.go:580]     Audit-Id: 70acdfda-063c-41f2-a921-71007eba8c2f
	I0429 13:11:02.689576   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.178489   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:03.178489   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:03.178489   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:03.178582   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:03.181100   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:03.181100   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:03.181100   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:03 GMT
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Audit-Id: e98aa19b-a8e4-4846-ae40-8199e4df5111
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:03.182150   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:03.182214   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:03.182631   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.678747   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:03.678968   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:03.678968   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:03.678968   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:03.681823   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:03.682818   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:03.682865   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:03.682865   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:03 GMT
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Audit-Id: fc371a51-6008-4662-b5b1-fe3f0b7151d1
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:03.682865   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:03.683189   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:03.684302   14008 node_ready.go:53] node "multinode-409200" has status "Ready":"False"
	I0429 13:11:04.187980   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:04.187980   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:04.187980   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:04.187980   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:04.192558   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:04.192558   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:04.192558   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:04.193375   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:04.193375   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:04 GMT
	I0429 13:11:04.193375   14008 round_trippers.go:580]     Audit-Id: 42accb44-3a3e-4a4d-b6de-7a2d1b947189
	I0429 13:11:04.193449   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:04.687973   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:04.687973   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:04.687973   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:04.687973   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:04.691528   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:04.691528   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Audit-Id: abd7d680-cd3e-4f06-9d00-991f9c31fedc
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:04.691528   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:04.691528   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:04.691528   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:04 GMT
	I0429 13:11:04.692327   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:05.173907   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.173986   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.173986   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.173986   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.181062   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:05.181134   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.181134   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.181134   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.181134   14008 round_trippers.go:580]     Audit-Id: 560eaec6-f5b1-4687-b8b7-9b642ee3a93d
	I0429 13:11:05.181202   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.181202   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.181202   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.181449   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1946","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0429 13:11:05.675709   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.675709   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.675709   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.675709   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.680276   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.680663   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Audit-Id: e72edc66-d778-4ce8-8de1-9b91fe2614b1
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.680663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.680663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.680663   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.680872   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.681252   14008 node_ready.go:49] node "multinode-409200" has status "Ready":"True"
	I0429 13:11:05.681252   14008 node_ready.go:38] duration metric: took 15.0077603s for node "multinode-409200" to be "Ready" ...
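[editor's note] The ~500 ms GET loop above (node_ready.go polling /api/v1/nodes/multinode-409200 until "Ready":"True") is a standard client-go readiness wait. A minimal sketch of the same pattern, not minikube's actual node_ready.go, using apimachinery's wait helpers; the clientset construction and package name are assumed:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node object on a fixed interval (the log above
// shows roughly one GET every 500ms) until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}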
	I0429 13:11:05.681252   14008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 13:11:05.681252   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:05.681252   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.681252   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.681252   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.691235   14008 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 13:11:05.691235   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.691235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.691235   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Audit-Id: 2b065644-9acf-46ae-913a-2603d5ced794
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.691235   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.692613   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1978"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:05.697408   14008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.697547   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ctb8n
	I0429 13:11:05.697605   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.697605   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.697605   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.701372   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.701372   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Audit-Id: 2e269bf8-1e53-4328-8bca-1e372edfddb3
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.701372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.701372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.701372   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.701372   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0429 13:11:05.702142   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.702217   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.702217   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.702217   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.704972   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.704972   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.704972   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.705629   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.705629   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Audit-Id: 6d470ec0-c21d-4cde-9493-159632c5149e
	I0429 13:11:05.705629   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.705970   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.706845   14008 pod_ready.go:92] pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.706908   14008 pod_ready.go:81] duration metric: took 9.3693ms for pod "coredns-7db6d8ff4d-ctb8n" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.706908   14008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.706967   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-409200
	I0429 13:11:05.706967   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.707056   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.707056   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.710092   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.710092   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Audit-Id: 084586b1-d566-4a6c-9105-c97f15185847
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.710236   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.710236   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.710236   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.710471   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-409200","namespace":"kube-system","uid":"b9b6b993-c1c6-46c3-8d07-0a639619f279","resourceVersion":"1952","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.179.21:2379","kubernetes.io/config.hash":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.mirror":"e52a2c55f8d70a755b3b61d5b714d564","kubernetes.io/config.seen":"2024-04-29T13:10:38.679846779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0429 13:11:05.710554   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.710554   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.710554   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.710554   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.715220   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.715220   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.715220   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.715220   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.715220   14008 round_trippers.go:580]     Audit-Id: 644262d3-c088-426b-9f3a-615b950790dd
	I0429 13:11:05.715907   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.715907   14008 pod_ready.go:92] pod "etcd-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.715907   14008 pod_ready.go:81] duration metric: took 8.9992ms for pod "etcd-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.715907   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.715907   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-409200
	I0429 13:11:05.715907   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.715907   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.715907   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.720114   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:05.720188   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Audit-Id: 1b5dd591-0a57-4c1d-bd36-71812be7721f
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.720188   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.720188   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.720188   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.720485   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-409200","namespace":"kube-system","uid":"6b6a5200-5ddb-4315-be16-b0d86d36820f","resourceVersion":"1954","creationTimestamp":"2024-04-29T13:10:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.179.21:8443","kubernetes.io/config.hash":"67a711354a194289dea1aee475e07833","kubernetes.io/config.mirror":"67a711354a194289dea1aee475e07833","kubernetes.io/config.seen":"2024-04-29T13:10:38.602845937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T13:10:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0429 13:11:05.721347   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.721347   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.721347   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.721347   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.724243   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.724243   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.724243   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Audit-Id: e68c256a-c95b-46e8-8a7e-0503156e865b
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.725182   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.725182   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.725182   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.726030   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.726728   14008 pod_ready.go:92] pod "kube-apiserver-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.726728   14008 pod_ready.go:81] duration metric: took 10.8211ms for pod "kube-apiserver-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.726728   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.726728   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-409200
	I0429 13:11:05.726728   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.726728   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.726728   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.730308   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.730308   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Audit-Id: 6dee5fdc-0be7-437c-9b4c-ee1c4d738f18
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.730308   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.730308   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.730308   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.731100   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-409200","namespace":"kube-system","uid":"bc75101f-63f2-4b41-a912-4d015c4fd4aa","resourceVersion":"1935","creationTimestamp":"2024-04-29T12:44:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.mirror":"4cf53221646bb55509cc5a45851d372b","kubernetes.io/config.seen":"2024-04-29T12:44:32.885750739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0429 13:11:05.731664   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:05.731731   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.731731   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.731731   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.734372   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:05.734372   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Audit-Id: fe3df153-5467-464c-a3b5-3b8365511d0d
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.734372   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.734372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.734372   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.735222   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:05.735672   14008 pod_ready.go:92] pod "kube-controller-manager-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:05.735793   14008 pod_ready.go:81] duration metric: took 9.0643ms for pod "kube-controller-manager-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.735793   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:05.879512   14008 request.go:629] Waited for 143.7185ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:11:05.879671   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbxqg
	I0429 13:11:05.879671   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:05.879671   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:05.879671   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:05.883626   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:05.884628   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:05.885018   14008 round_trippers.go:580]     Audit-Id: c1e48354-ef13-4b35-97a5-dbf41ae2d8b3
	I0429 13:11:05.885135   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:05.885135   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:05.885135   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:05.885135   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:05.885230   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:05 GMT
	I0429 13:11:05.886475   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbxqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4f811c-336b-4038-b6ff-d62efffacd9b","resourceVersion":"1429","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
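[editor's note] The request.go:629 "Waited ... due to client-side throttling, not priority and fairness" messages here and below come from client-go's local token-bucket rate limiter (rest.Config defaults: QPS 5, Burst 10), not from API-server priority & fairness. A minimal sketch of raising those limits when building a clientset; the values and function name are illustrative:

package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a larger client-side rate-limit budget,
// which shortens or eliminates the throttling waits seen in this log.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // default is 5 requests/sec once the burst bucket drains
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}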
	I0429 13:11:06.084506   14008 request.go:629] Waited for 196.9071ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:11:06.084681   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m03
	I0429 13:11:06.084681   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.084681   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.084681   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.090381   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:06.090651   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.090651   14008 round_trippers.go:580]     Audit-Id: 4fab6aff-c696-41a8-9796-c573426356ad
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.090717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.090717   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.090717   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.090996   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m03","uid":"d4d7c143-2c53-4eb2-9323-5c1ee0d251ea","resourceVersion":"1943","creationTimestamp":"2024-04-29T12:52:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_52_38_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:52:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4315 chars]
	I0429 13:11:06.091527   14008 pod_ready.go:97] node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
	I0429 13:11:06.091651   14008 pod_ready.go:81] duration metric: took 355.8558ms for pod "kube-proxy-bbxqg" in "kube-system" namespace to be "Ready" ...
	E0429 13:11:06.091651   14008 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-409200-m03" hosting pod "kube-proxy-bbxqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-409200-m03" has status "Ready":"Unknown"
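[editor's note] The lines above show the wait being skipped, not failed, because the hosting node multinode-409200-m03 reports "Ready":"Unknown". A hedged sketch of that decision; the helper names (nodeReady, skipPodWait) are hypothetical and only approximate the logged pod_ready.go behavior:

package podwait

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether a node's Ready condition is True; a status of
// "Unknown" (as for multinode-409200-m03 above) counts as not ready.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// skipPodWait mirrors the logged outcome: when the pod's hosting node is not
// Ready, waiting on the pod is skipped rather than treated as a failure.
func skipPodWait(pod *corev1.Pod, node *corev1.Node) bool {
	return pod.Spec.NodeName == node.Name && !nodeReady(node)
}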
	I0429 13:11:06.091651   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.290939   14008 request.go:629] Waited for 199.0777ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:11:06.291211   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g2jp8
	I0429 13:11:06.291274   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.291274   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.291274   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.298143   14008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 13:11:06.298143   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.298143   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.298143   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Audit-Id: 747a95d3-c072-463e-9db2-88d7e12ed5ca
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.298143   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.298920   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g2jp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"d2c926f8-0701-483c-84ae-295e7bb08fc9","resourceVersion":"1916","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0429 13:11:06.491212   14008 request.go:629] Waited for 192.0395ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:06.491669   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:06.491669   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.491731   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.491731   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.494958   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:06.494958   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Audit-Id: ef6f83d6-1d6b-4e54-9cb4-d33a6f354d5c
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.494958   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.494958   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.494958   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.498383   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:06.498977   14008 pod_ready.go:92] pod "kube-proxy-g2jp8" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:06.499080   14008 pod_ready.go:81] duration metric: took 407.4257ms for pod "kube-proxy-g2jp8" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.499080   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.678596   14008 request.go:629] Waited for 179.3821ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:11:06.678847   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwc65
	I0429 13:11:06.678847   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.678847   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.678847   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.684227   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:06.684227   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Audit-Id: 0adc8eb6-9903-4d4f-9b24-7879b44914a1
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.684227   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.684227   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.684663   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.684795   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwc65","generateName":"kube-proxy-","namespace":"kube-system","uid":"98e18062-2d8f-45d3-a8fa-dda098365db8","resourceVersion":"606","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"472e3591-709a-44cd-8355-5d77eb32f5b8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"472e3591-709a-44cd-8355-5d77eb32f5b8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0429 13:11:06.882781   14008 request.go:629] Waited for 197.6295ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:11:06.883033   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200-m02
	I0429 13:11:06.883033   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:06.883033   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:06.883033   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:06.886819   14008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 13:11:06.886819   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:06.886819   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:06 GMT
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Audit-Id: c6b740c7-31f1-429b-ba96-c3e365079573
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:06.886819   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:06.886819   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:06.887828   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200-m02","uid":"47358e8f-1b64-4611-be52-260f265b490a","resourceVersion":"1622","creationTimestamp":"2024-04-29T12:47:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_29T12_47_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:47:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0429 13:11:06.887828   14008 pod_ready.go:92] pod "kube-proxy-lwc65" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:06.887828   14008 pod_ready.go:81] duration metric: took 388.7453ms for pod "kube-proxy-lwc65" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:06.887828   14008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:07.086917   14008 request.go:629] Waited for 198.8928ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:11:07.087113   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-409200
	I0429 13:11:07.087113   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.087113   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.087304   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.093037   14008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 13:11:07.093037   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Audit-Id: 1e2b60d1-6818-4c6b-b33c-5d9514c5c89d
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.093037   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.093037   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.093037   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.093331   14008 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-409200","namespace":"kube-system","uid":"6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266","resourceVersion":"1934","creationTimestamp":"2024-04-29T12:44:32Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.mirror":"7889bd5d99ec8e82efea90239c7d5ee9","kubernetes.io/config.seen":"2024-04-29T12:44:24.392867685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0429 13:11:07.290286   14008 request.go:629] Waited for 196.5112ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:07.290286   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes/multinode-409200
	I0429 13:11:07.290286   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.290286   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.290286   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.294919   14008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 13:11:07.294919   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Audit-Id: 2c7a4793-9967-4e0f-aae9-77addb3ebd01
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.294919   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.294919   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.294919   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.295888   14008 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1978","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-29T12:44:29Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0429 13:11:07.296497   14008 pod_ready.go:92] pod "kube-scheduler-multinode-409200" in "kube-system" namespace has status "Ready":"True"
	I0429 13:11:07.296608   14008 pod_ready.go:81] duration metric: took 408.7766ms for pod "kube-scheduler-multinode-409200" in "kube-system" namespace to be "Ready" ...
	I0429 13:11:07.296628   14008 pod_ready.go:38] duration metric: took 1.6153642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
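
	(note: the pod_ready loop above polls each system pod until it reports condition Ready=True, or skips it when its node is not Ready, as happened for kube-proxy-bbxqg. A minimal client-go sketch of such a wait, assuming a kubeconfig at the default location; the pod name and 6m0s timeout mirror the log, everything else is illustrative rather than minikube's actual pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod reports condition Ready=True,
	// roughly what the pod_ready.go lines above do for each system pod.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the real loop is also paced by client-go's rate limiter
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitPodReady(cs, "kube-system", "kube-scheduler-multinode-409200", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("Ready")
	}
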
	I0429 13:11:07.296665   14008 api_server.go:52] waiting for apiserver process to appear ...
	I0429 13:11:07.312809   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:11:07.345695   14008 command_runner.go:130] > 1888
	I0429 13:11:07.346302   14008 api_server.go:72] duration metric: took 17.0832182s to wait for apiserver process to appear ...
	I0429 13:11:07.346302   14008 api_server.go:88] waiting for apiserver healthz status ...
	I0429 13:11:07.346302   14008 api_server.go:253] Checking apiserver healthz at https://172.26.179.21:8443/healthz ...
	I0429 13:11:07.356740   14008 api_server.go:279] https://172.26.179.21:8443/healthz returned 200:
	ok
	I0429 13:11:07.357463   14008 round_trippers.go:463] GET https://172.26.179.21:8443/version
	I0429 13:11:07.357463   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.357463   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.357463   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.360052   14008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 13:11:07.360052   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.360052   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Content-Length: 263
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Audit-Id: 3e362bae-65b1-4699-9423-6123b744af12
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.360192   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.360192   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.360192   14008 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0429 13:11:07.360352   14008 api_server.go:141] control plane version: v1.30.0
	I0429 13:11:07.360446   14008 api_server.go:131] duration metric: took 14.1446ms to wait for apiserver health ...
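
	(note: after the readiness waits, the log hits /healthz and then decodes the /version JSON shown above. A self-contained sketch of those two probes; InsecureSkipVerify is an assumption to keep the example short — the real client authenticates with the kubeconfig's client certificates instead of skipping verification.)

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	// versionInfo mirrors the fields of the /version body logged above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		base := "https://172.26.179.21:8443" // apiserver address from the log

		// 1. /healthz should return 200 with body "ok".
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			log.Fatal(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

		// 2. /version yields the JSON decoded into control plane version above.
		resp, err = client.Get(base + "/version")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
	}
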
	I0429 13:11:07.360502   14008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 13:11:07.478195   14008 request.go:629] Waited for 117.4453ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.478195   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.478480   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.478480   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.478480   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.486841   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.486841   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.486841   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.486841   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.486841   14008 round_trippers.go:580]     Audit-Id: f6f584cb-eaf3-47d2-8ff1-a01a43753afd
	I0429 13:11:07.488927   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:07.493205   14008 system_pods.go:59] 12 kube-system pods found
	I0429 13:11:07.493205   14008 system_pods.go:61] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 13:11:07.493205   14008 system_pods.go:61] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 13:11:07.493794   14008 system_pods.go:74] duration metric: took 132.7027ms to wait for pod list to return data ...
	I0429 13:11:07.493794   14008 default_sa.go:34] waiting for default service account to be created ...
	I0429 13:11:07.679506   14008 request.go:629] Waited for 185.4727ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/default/serviceaccounts
	I0429 13:11:07.679506   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/default/serviceaccounts
	I0429 13:11:07.679506   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.679506   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.679506   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.687301   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.687301   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.687301   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.687301   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Content-Length: 262
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.687301   14008 round_trippers.go:580]     Audit-Id: 62d1e99d-c2f3-4609-b041-9dff5a486a55
	I0429 13:11:07.687301   14008 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c200474-8705-40aa-8512-ec20a74a9ff0","resourceVersion":"323","creationTimestamp":"2024-04-29T12:44:46Z"}}]}
	I0429 13:11:07.687301   14008 default_sa.go:45] found service account: "default"
	I0429 13:11:07.687301   14008 default_sa.go:55] duration metric: took 193.5049ms for default service account to be created ...
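
	(note: the recurring "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter; rest.Config defaults to roughly 5 QPS with a burst of 10. A rough reproduction of the effect with golang.org/x/time/rate — the numbers are the documented defaults, not values read from this run.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Mimic client-go's default client-side limiter: ~5 requests/sec,
		// burst 10. Once the burst is spent, each Wait blocks ~200ms,
		// the magnitude of the waits logged above.
		limiter := rate.NewLimiter(rate.Limit(5), 10)

		for i := 0; i < 20; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			if waited := time.Since(start); waited > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
			}
		}
	}
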
	I0429 13:11:07.687301   14008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 13:11:07.885946   14008 request.go:629] Waited for 198.471ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.886152   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/namespaces/kube-system/pods
	I0429 13:11:07.886152   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:07.886152   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:07.886152   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:07.893359   14008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 13:11:07.893475   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:07.893475   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:07 GMT
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Audit-Id: a1dc6265-af27-4d09-964f-6572b9695aa1
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:07.893475   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:07.893475   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:07.895074   14008 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-ctb8n","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"1141a626-d4ac-4826-a912-7b7ed378b013","resourceVersion":"1967","creationTimestamp":"2024-04-29T12:44:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b06d4345-30e6-4270-b247-8af160d2fa5c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-29T12:44:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b06d4345-30e6-4270-b247-8af160d2fa5c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86100 chars]
	I0429 13:11:07.899419   14008 system_pods.go:86] 12 kube-system pods found
	I0429 13:11:07.899419   14008 system_pods.go:89] "coredns-7db6d8ff4d-ctb8n" [1141a626-d4ac-4826-a912-7b7ed378b013] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "etcd-multinode-409200" [b9b6b993-c1c6-46c3-8d07-0a639619f279] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-7p265" [d6da7369-a131-4058-b9a2-4ee6e9ac8a4f] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-svw9w" [81d6ce68-e391-48d1-8246-3f7047ba52e2] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kindnet-xj48j" [adefd380-e946-47ff-b57c-3baa04e6f99c] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-apiserver-multinode-409200" [6b6a5200-5ddb-4315-be16-b0d86d36820f] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-controller-manager-multinode-409200" [bc75101f-63f2-4b41-a912-4d015c4fd4aa] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-bbxqg" [3c4f811c-336b-4038-b6ff-d62efffacd9b] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-g2jp8" [d2c926f8-0701-483c-84ae-295e7bb08fc9] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-proxy-lwc65" [98e18062-2d8f-45d3-a8fa-dda098365db8] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "kube-scheduler-multinode-409200" [6c9490e4-fb0e-4e5b-a2ae-0f5096e8c266] Running
	I0429 13:11:07.899419   14008 system_pods.go:89] "storage-provisioner" [a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9] Running
	I0429 13:11:07.899419   14008 system_pods.go:126] duration metric: took 212.1167ms to wait for k8s-apps to be running ...
	I0429 13:11:07.899419   14008 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 13:11:07.911383   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:11:07.952016   14008 system_svc.go:56] duration metric: took 52.5962ms WaitForService to wait for kubelet
	I0429 13:11:07.952070   14008 kubeadm.go:576] duration metric: took 17.6889824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 13:11:07.952070   14008 node_conditions.go:102] verifying NodePressure condition ...
	I0429 13:11:08.089687   14008 request.go:629] Waited for 137.4332ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.179.21:8443/api/v1/nodes
	I0429 13:11:08.089825   14008 round_trippers.go:463] GET https://172.26.179.21:8443/api/v1/nodes
	I0429 13:11:08.089825   14008 round_trippers.go:469] Request Headers:
	I0429 13:11:08.089825   14008 round_trippers.go:473]     Accept: application/json, */*
	I0429 13:11:08.089884   14008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0429 13:11:08.099580   14008 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0429 13:11:08.099737   14008 round_trippers.go:577] Response Headers:
	I0429 13:11:08.099737   14008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 188f4c20-7c9b-422f-9e42-c0f0abc70a6c
	I0429 13:11:08.099737   14008 round_trippers.go:580]     Date: Mon, 29 Apr 2024 13:11:08 GMT
	I0429 13:11:08.099737   14008 round_trippers.go:580]     Audit-Id: 94fa5ce5-7333-489f-9172-24e1fe7734b6
	I0429 13:11:08.099809   14008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0429 13:11:08.099835   14008 round_trippers.go:580]     Content-Type: application/json
	I0429 13:11:08.099835   14008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4e09009f-5912-417a-9038-4d4b1ec118e6
	I0429 13:11:08.100486   14008 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1983"},"items":[{"metadata":{"name":"multinode-409200","uid":"fc01e0ed-0807-457d-bdad-dc2e471b22d0","resourceVersion":"1982","creationTimestamp":"2024-04-29T12:44:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-409200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8dbdf0965c47dd73171cfc89e1a9c75505f7a22d","minikube.k8s.io/name":"multinode-409200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_29T12_44_34_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15418 chars]
	I0429 13:11:08.101709   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 13:11:08.101783   14008 node_conditions.go:123] node cpu capacity is 2
	I0429 13:11:08.101783   14008 node_conditions.go:105] duration metric: took 149.7119ms to run NodePressure ...
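
	(note: the NodePressure check reads each node's capacity, which is where the repeated "cpu capacity is 2" / "ephemeral capacity is 17734596Ki" pairs above — one per node — come from. A client-go sketch of that read, again assuming a kubeconfig at the default path.)

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
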
	I0429 13:11:08.101783   14008 start.go:240] waiting for startup goroutines ...
	I0429 13:11:08.101783   14008 start.go:245] waiting for cluster config update ...
	I0429 13:11:08.101783   14008 start.go:254] writing updated cluster config ...
	I0429 13:11:08.106053   14008 out.go:177] 
	I0429 13:11:08.110091   14008 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:11:08.114231   14008 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:11:08.114231   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:11:08.123452   14008 out.go:177] * Starting "multinode-409200-m02" worker node in "multinode-409200" cluster
	I0429 13:11:08.126895   14008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 13:11:08.127240   14008 cache.go:56] Caching tarball of preloaded images
	I0429 13:11:08.128014   14008 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 13:11:08.128142   14008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 13:11:08.128552   14008 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-409200\config.json ...
	I0429 13:11:08.131167   14008 start.go:360] acquireMachinesLock for multinode-409200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 13:11:08.131167   14008 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-409200-m02"
	I0429 13:11:08.131167   14008 start.go:96] Skipping create...Using existing machine configuration
	I0429 13:11:08.131167   14008 fix.go:54] fixHost starting: m02
	I0429 13:11:08.131832   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:11:10.293329   14008 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:11:10.293877   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:10.293877   14008 fix.go:112] recreateIfNeeded on multinode-409200-m02: state=Stopped err=<nil>
	W0429 13:11:10.293877   14008 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 13:11:10.300732   14008 out.go:177] * Restarting existing hyperv VM for "multinode-409200-m02" ...
	I0429 13:11:10.303796   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-409200-m02
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:13.467416   14008 main.go:141] libmachine: Waiting for host to start...
	I0429 13:11:13.467416   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:11:15.764453   14008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:11:15.764453   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:15.764646   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:11:18.398789   14008 main.go:141] libmachine: [stdout =====>] : 
	I0429 13:11:18.398789   14008 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:11:19.400754   14008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
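
	(note: the libmachine lines above shell out to powershell.exe to query and start the Hyper-V VM, then poll until the guest reports Running. A minimal Go sketch of that pattern under the same commands the log shows; Windows-only, needs Hyper-V admin rights, and it is an illustration, not the actual driver code.)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	const ps = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

	// vmState runs the same query as the log: ( Hyper-V\Get-VM <name> ).state
	func vmState(name string) (string, error) {
		out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		vm := "multinode-409200-m02"

		state, err := vmState(vm)
		if err != nil {
			log.Fatal(err)
		}
		if state == "Off" {
			// Hyper-V\Start-VM returns before the guest is up, so poll the
			// state afterwards, as in "Waiting for host to start..." above.
			if err := exec.Command(ps, "-NoProfile", "-NonInteractive",
				"Hyper-V\\Start-VM "+vm).Run(); err != nil {
				log.Fatal(err)
			}
		}
		for state != "Running" {
			time.Sleep(time.Second)
			if state, err = vmState(vm); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println(vm, "is", state)
	}
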
	
	
	==> Docker <==
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.030769202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.030939403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.032485412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.055292444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.055368445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.055387645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.055574046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 cri-dockerd[1278]: time="2024-04-29T13:11:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c7d0712b1cd17b0d9d94294c409aed321b59fbbec9a481cf2c5aed966e0c27d8/resolv.conf as [nameserver 172.26.176.1]"
	Apr 29 13:11:02 multinode-409200 cri-dockerd[1278]: time="2024-04-29T13:11:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e7a1902651a2e9f97cbfe9cfcadf463f7dbfc82c9b1b4f9fc5bf1497c45cfc02/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.661605360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.661835961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.661904661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.662150963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.725093909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.725297610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.725502511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:02 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:02.726377116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:17 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:17.429193270Z" level=info msg="shim disconnected" id=8f7ea198e63de82d0eb8176a982c3e474bc13c86394c82293b19827180ef25e8 namespace=moby
	Apr 29 13:11:17 multinode-409200 dockerd[1052]: time="2024-04-29T13:11:17.429289969Z" level=info msg="ignoring event" container=8f7ea198e63de82d0eb8176a982c3e474bc13c86394c82293b19827180ef25e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 29 13:11:17 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:17.430864953Z" level=warning msg="cleaning up after shim disconnected" id=8f7ea198e63de82d0eb8176a982c3e474bc13c86394c82293b19827180ef25e8 namespace=moby
	Apr 29 13:11:17 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:17.431965642Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 29 13:11:29 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:29.980567974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 13:11:29 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:29.980656473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 13:11:29 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:29.980676773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 13:11:29 multinode-409200 dockerd[1058]: time="2024-04-29T13:11:29.981015570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	76b74b9e1c636       6e38f40d628db                                                                                         16 seconds ago       Running             storage-provisioner       2                   bdb23844f9770       storage-provisioner
	c6ac8a3b4acb8       8c811b4aec35f                                                                                         43 seconds ago       Running             busybox                   1                   e7a1902651a2e       busybox-fc5497c4f-gr44t
	2a7aefee46f72       cbb01a7bd410d                                                                                         43 seconds ago       Running             coredns                   1                   c7d0712b1cd17       coredns-7db6d8ff4d-ctb8n
	b63c557cdc84a       4950bb10b3f87                                                                                         58 seconds ago       Running             kindnet-cni               1                   afa1e20276b87       kindnet-xj48j
	8f7ea198e63de       6e38f40d628db                                                                                         59 seconds ago       Exited              storage-provisioner       1                   bdb23844f9770       storage-provisioner
	136039c02f783       a0bf559e280cf                                                                                         59 seconds ago       Running             kube-proxy                1                   87543de01680d       kube-proxy-g2jp8
	fb84617b76087       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   79f19581367bb       etcd-multinode-409200
	bb90fe6bf6c11       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   ab05fdf92d1fe       kube-apiserver-multinode-409200
	d074bfb341afd       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   5e1ef49b6609d       kube-scheduler-multinode-409200
	a70dddc97b188       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   4d093d4745729       kube-controller-manager-multinode-409200
	9a3d650be06c0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago       Exited              busybox                   0                   d3a063be2c6a2       busybox-fc5497c4f-gr44t
	98ab9c7d68851       cbb01a7bd410d                                                                                         26 minutes ago       Exited              coredns                   0                   ba73c7e4d62c2       coredns-7db6d8ff4d-ctb8n
	caeb8f4bcea15       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago       Exited              kindnet-cni               0                   3792c8bbb983d       kindnet-xj48j
	3ba8caba4bc56       a0bf559e280cf                                                                                         26 minutes ago       Exited              kube-proxy                0                   2d26cd85561dd       kube-proxy-g2jp8
	315326a1ce10c       259c8277fcbbc                                                                                         27 minutes ago       Exited              kube-scheduler            0                   c88537851c019       kube-scheduler-multinode-409200
	5adb6a9084e4b       c7aad43836fa5                                                                                         27 minutes ago       Exited              kube-controller-manager   0                   19fd9c3dddd43       kube-controller-manager-multinode-409200
	
	
	==> coredns [2a7aefee46f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ad034cdec630ea896b94a48e8befd9caaf201b38d8a8007174c2232543e2c99f7633cb4df3d02156a6d84597982f74bb9dc874d19116cf29e0234336f9f204d8
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34502 - 1317 "HINFO IN 5420133666185898013.1614063667135350613. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059800833s
	
	
	==> coredns [98ab9c7d6885] <==
	[INFO] 10.244.1.2:45305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000112002s
	[INFO] 10.244.1.2:41116 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177102s
	[INFO] 10.244.1.2:57979 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158402s
	[INFO] 10.244.1.2:49615 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000059801s
	[INFO] 10.244.1.2:42034 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000564s
	[INFO] 10.244.1.2:59112 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133602s
	[INFO] 10.244.1.2:44817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055401s
	[INFO] 10.244.0.3:47750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202902s
	[INFO] 10.244.0.3:42610 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058701s
	[INFO] 10.244.0.3:48140 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094301s
	[INFO] 10.244.0.3:43769 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056701s
	[INFO] 10.244.1.2:35529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000365104s
	[INFO] 10.244.1.2:35716 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176402s
	[INFO] 10.244.1.2:54486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129601s
	[INFO] 10.244.1.2:44351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000646s
	[INFO] 10.244.0.3:53572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267303s
	[INFO] 10.244.0.3:60447 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147901s
	[INFO] 10.244.0.3:49757 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147202s
	[INFO] 10.244.0.3:51305 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081501s
	[INFO] 10.244.1.2:52861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175302s
	[INFO] 10.244.1.2:45137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199102s
	[INFO] 10.244.1.2:32823 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190002s
	[INFO] 10.244.1.2:41704 - 5 "PTR IN 1.176.26.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061001s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
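
	(note: the query pattern above follows from the pod's resolv.conf, which cri-dockerd rewrote earlier in the Docker log to nameserver 10.96.0.10 with the cluster search path and ndots:5 — short names like "kubernetes.default" are expanded through each search domain, producing the NXDOMAIN answers before the FQDN resolves. A trivial lookup that would exercise that path; it only resolves from inside a pod on this cluster.)

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Resolves via the cluster DNS configured in the pod's resolv.conf.
		addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs)
	}
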
	
	
	==> describe nodes <==
	Name:               multinode-409200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_44_34_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:11:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 13:11:05 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 13:11:05 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 13:11:05 +0000   Mon, 29 Apr 2024 12:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 13:11:05 +0000   Mon, 29 Apr 2024 13:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.179.21
	  Hostname:    multinode-409200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9eb309c79b34ce9a517457d85176e1e
	  System UUID:                560251d1-f442-3048-aa69-bfa1c5b44db2
	  Boot ID:                    19b06999-a8bd-4501-93f2-bdbd41ae99ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gr44t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-ctb8n                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-409200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         60s
	  kube-system                 kindnet-xj48j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-409200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-multinode-409200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-g2jp8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-409200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m                kubelet          Node multinode-409200 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node multinode-409200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node multinode-409200 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node multinode-409200 event: Registered Node multinode-409200 in Controller
	  Normal  NodeReady                26m                kubelet          Node multinode-409200 status is now: NodeReady
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node multinode-409200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node multinode-409200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)  kubelet          Node multinode-409200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           47s                node-controller  Node multinode-409200 event: Registered Node multinode-409200 in Controller
	
	
	Name:               multinode-409200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_47_49_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:47:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 13:07:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 13:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 13:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 13:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 13:04:07 +0000   Mon, 29 Apr 2024 13:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.26.183.208
	  Hostname:    multinode-409200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d58c45a85c440c597f0a96b30e84f09
	  System UUID:                8c823ba6-3970-cc46-8a8d-d45bb5bace8c
	  Boot ID:                    40b5e515-11a3-4198-b85e-669d356ae177
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xvm2v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-svw9w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-lwc65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-409200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-409200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node multinode-409200-m02 event: Registered Node multinode-409200-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-409200-m02 status is now: NodeReady
	  Normal  RegisteredNode           47s                node-controller  Node multinode-409200-m02 event: Registered Node multinode-409200-m02 in Controller
	  Normal  NodeNotReady             7s                 node-controller  Node multinode-409200-m02 status is now: NodeNotReady
	
	
	Name:               multinode-409200-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-409200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=multinode-409200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_52_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:52:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-409200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:59:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 12:58:13 +0000   Mon, 29 Apr 2024 13:00:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.26.183.1
	  Hostname:    multinode-409200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fb1f09d6927404399a9e8da87cc3dea
	  System UUID:                4609bb56-f956-874e-bb10-b85027c7b67f
	  Boot ID:                    0af6b34b-d477-4688-94f5-fcd2f3452b10
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7p265       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-bbxqg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-409200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-409200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-409200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node multinode-409200-m03 event: Registered Node multinode-409200-m03 in Controller
	  Normal  NodeReady                18m                kubelet          Node multinode-409200-m03 status is now: NodeReady
	  Normal  NodeNotReady             11m                node-controller  Node multinode-409200-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           47s                node-controller  Node multinode-409200-m03 event: Registered Node multinode-409200-m03 in Controller
	
	
	==> dmesg <==
	[  +6.063792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.718997] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +2.311087] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Apr29 13:09] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +51.960280] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.239772] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[Apr29 13:10] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.129692] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.677568] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.232214] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.281005] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +3.087940] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.229411] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.225606] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +0.313461] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.127299] kauditd_printk_skb: 183 callbacks suppressed
	[  +0.884982] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +5.389524] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +0.120798] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.136789] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.884622] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +7.716212] kauditd_printk_skb: 70 callbacks suppressed
	[Apr29 13:11] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [fb84617b7608] <==
	{"level":"info","ts":"2024-04-29T13:10:41.102109Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:10:41.102155Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T13:10:41.104901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da switched to configuration voters=(14605376041931924186)"}
	{"level":"info","ts":"2024-04-29T13:10:41.105094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7be84cdbccca5422","local-member-id":"cab0b820a65a62da","added-peer-id":"cab0b820a65a62da","added-peer-peer-urls":["https://172.26.185.116:2380"]}
	{"level":"info","ts":"2024-04-29T13:10:41.105479Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7be84cdbccca5422","local-member-id":"cab0b820a65a62da","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:10:41.105603Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T13:10:41.174336Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T13:10:41.176615Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cab0b820a65a62da","initial-advertise-peer-urls":["https://172.26.179.21:2380"],"listen-peer-urls":["https://172.26.179.21:2380"],"advertise-client-urls":["https://172.26.179.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.26.179.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T13:10:41.178477Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T13:10:41.183616Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.26.179.21:2380"}
	{"level":"info","ts":"2024-04-29T13:10:41.183639Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.26.179.21:2380"}
	{"level":"info","ts":"2024-04-29T13:10:42.203971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T13:10:42.206937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T13:10:42.207179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da received MsgPreVoteResp from cab0b820a65a62da at term 2"}
	{"level":"info","ts":"2024-04-29T13:10:42.207385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T13:10:42.20749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da received MsgVoteResp from cab0b820a65a62da at term 3"}
	{"level":"info","ts":"2024-04-29T13:10:42.207614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cab0b820a65a62da became leader at term 3"}
	{"level":"info","ts":"2024-04-29T13:10:42.207697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cab0b820a65a62da elected leader cab0b820a65a62da at term 3"}
	{"level":"info","ts":"2024-04-29T13:10:42.214171Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cab0b820a65a62da","local-member-attributes":"{Name:multinode-409200 ClientURLs:[https://172.26.179.21:2379]}","request-path":"/0/members/cab0b820a65a62da/attributes","cluster-id":"7be84cdbccca5422","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T13:10:42.216962Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:10:42.217348Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T13:10:42.229865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T13:10:42.229948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T13:10:42.246604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.179.21:2379"}
	{"level":"info","ts":"2024-04-29T13:10:42.269243Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:11:45 up 2 min,  0 users,  load average: 0.33, 0.14, 0.04
	Linux multinode-409200 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b63c557cdc84] <==
	I0429 13:10:55.832321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.26.183.1 Flags: [] Table: 0} 
	I0429 13:11:05.847719       1 main.go:223] Handling node with IPs: map[172.26.179.21:{}]
	I0429 13:11:05.847893       1 main.go:227] handling current node
	I0429 13:11:05.847909       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:11:05.847918       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:11:05.848107       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:11:05.848120       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:11:15.857598       1 main.go:223] Handling node with IPs: map[172.26.179.21:{}]
	I0429 13:11:15.857705       1 main.go:227] handling current node
	I0429 13:11:15.857721       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:11:15.857731       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:11:15.858310       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:11:15.858716       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:11:25.872733       1 main.go:223] Handling node with IPs: map[172.26.179.21:{}]
	I0429 13:11:25.872927       1 main.go:227] handling current node
	I0429 13:11:25.872944       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:11:25.872954       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:11:25.873917       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:11:25.873997       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:11:35.893187       1 main.go:223] Handling node with IPs: map[172.26.179.21:{}]
	I0429 13:11:35.893236       1 main.go:227] handling current node
	I0429 13:11:35.893250       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:11:35.893257       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:11:35.893475       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:11:35.893510       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [caeb8f4bcea1] <==
	I0429 13:07:18.188692       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:07:28.197111       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:07:28.197198       1 main.go:227] handling current node
	I0429 13:07:28.197214       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:07:28.197222       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:07:28.197483       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:07:28.197519       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:07:38.212214       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:07:38.212403       1 main.go:227] handling current node
	I0429 13:07:38.212438       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:07:38.212517       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:07:38.212931       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:07:38.212966       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:07:48.220683       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:07:48.221184       1 main.go:227] handling current node
	I0429 13:07:48.221203       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:07:48.221601       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:07:48.222155       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:07:48.222249       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	I0429 13:07:58.238686       1 main.go:223] Handling node with IPs: map[172.26.185.116:{}]
	I0429 13:07:58.238850       1 main.go:227] handling current node
	I0429 13:07:58.238867       1 main.go:223] Handling node with IPs: map[172.26.183.208:{}]
	I0429 13:07:58.238876       1 main.go:250] Node multinode-409200-m02 has CIDR [10.244.1.0/24] 
	I0429 13:07:58.239479       1 main.go:223] Handling node with IPs: map[172.26.183.1:{}]
	I0429 13:07:58.239502       1 main.go:250] Node multinode-409200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [bb90fe6bf6c1] <==
	I0429 13:10:44.685333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 13:10:44.685969       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 13:10:44.707199       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 13:10:44.707843       1 aggregator.go:165] initial CRD sync complete...
	I0429 13:10:44.708132       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 13:10:44.708390       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 13:10:44.708575       1 cache.go:39] Caches are synced for autoregister controller
	I0429 13:10:44.724152       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 13:10:44.733605       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 13:10:44.735851       1 policy_source.go:224] refreshing policies
	I0429 13:10:44.760446       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 13:10:44.784177       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 13:10:44.784932       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 13:10:44.784968       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 13:10:44.788921       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 13:10:45.590622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 13:10:46.514926       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.26.179.21 172.26.185.116]
	I0429 13:10:46.517201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 13:10:46.532765       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 13:10:48.118511       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 13:10:48.437621       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 13:10:48.467577       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 13:10:48.597526       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 13:10:48.609834       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0429 13:11:06.516947       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.26.179.21]
	
	
	==> kube-controller-manager [5adb6a9084e4] <==
	I0429 12:44:48.225494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.476922ms"
	I0429 12:44:48.261461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.901256ms"
	I0429 12:44:48.261977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="350.603µs"
	I0429 12:45:01.593292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.901µs"
	I0429 12:45:01.625573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="248.901µs"
	I0429 12:45:03.575482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.075381ms"
	I0429 12:45:03.577737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.2µs"
	I0429 12:45:06.222594       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 12:47:49.237379       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-409200-m02\" does not exist"
	I0429 12:47:49.263216       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-409200-m02" podCIDRs=["10.244.1.0/24"]
	I0429 12:47:51.255160       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m02"
	I0429 12:48:12.497091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 12:48:39.315624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.709457ms"
	I0429 12:48:39.348543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.825151ms"
	I0429 12:48:39.350006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.599µs"
	I0429 12:48:41.641664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.408001ms"
	I0429 12:48:41.641949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.401µs"
	I0429 12:48:41.676091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.426762ms"
	I0429 12:48:41.676205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.201µs"
	I0429 12:52:37.159818       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-409200-m03\" does not exist"
	I0429 12:52:37.160747       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 12:52:37.177713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-409200-m03" podCIDRs=["10.244.2.0/24"]
	I0429 12:52:41.323171       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m03"
	I0429 12:52:56.218996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m03"
	I0429 13:00:36.459927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	
	
	==> kube-controller-manager [a70dddc97b18] <==
	I0429 13:10:57.894987       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 13:10:57.895022       1 shared_informer.go:320] Caches are synced for job
	I0429 13:10:57.902914       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0429 13:10:57.941083       1 shared_informer.go:320] Caches are synced for HPA
	I0429 13:10:57.959595       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:10:58.007926       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 13:10:58.028084       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 13:10:58.028426       1 shared_informer.go:320] Caches are synced for taint
	I0429 13:10:58.028561       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 13:10:58.064969       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200"
	I0429 13:10:58.065099       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m02"
	I0429 13:10:58.065203       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-409200-m03"
	I0429 13:10:58.065256       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 13:10:58.098719       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0429 13:10:58.490127       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:10:58.490161       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 13:10:58.524457       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 13:11:03.862762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.48675ms"
	I0429 13:11:03.862997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.099µs"
	I0429 13:11:03.912839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="179.698µs"
	I0429 13:11:03.985191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.667246ms"
	I0429 13:11:03.987384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.1µs"
	I0429 13:11:05.278428       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-409200-m02"
	I0429 13:11:38.155534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.373575ms"
	I0429 13:11:38.156304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35µs"
	
	
	==> kube-proxy [136039c02f78] <==
	I0429 13:10:47.565917       1 server_linux.go:69] "Using iptables proxy"
	I0429 13:10:47.618554       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.179.21"]
	I0429 13:10:47.771915       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 13:10:47.771954       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 13:10:47.772019       1 server_linux.go:165] "Using iptables Proxier"
	I0429 13:10:47.785505       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 13:10:47.786685       1 server.go:872] "Version info" version="v1.30.0"
	I0429 13:10:47.786885       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:10:47.792083       1 config.go:192] "Starting service config controller"
	I0429 13:10:47.792533       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 13:10:47.793016       1 config.go:101] "Starting endpoint slice config controller"
	I0429 13:10:47.793061       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 13:10:47.808048       1 config.go:319] "Starting node config controller"
	I0429 13:10:47.808067       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 13:10:47.893771       1 shared_informer.go:320] Caches are synced for service config
	I0429 13:10:47.893881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 13:10:47.908252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3ba8caba4bc5] <==
	I0429 12:44:49.113215       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:44:49.178365       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.26.185.116"]
	I0429 12:44:49.235481       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:44:49.235656       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:44:49.235683       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:44:49.240257       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:44:49.243830       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:44:49.243910       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:44:49.247315       1 config.go:192] "Starting service config controller"
	I0429 12:44:49.248504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:44:49.248691       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:44:49.248945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:44:49.251257       1 config.go:319] "Starting node config controller"
	I0429 12:44:49.251298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:44:49.349845       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:44:49.349850       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:44:49.351890       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [315326a1ce10] <==
	E0429 12:44:30.427377       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:44:30.447600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.448660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.467546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:44:30.467843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:44:30.543006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:44:30.543577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:44:30.596529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 12:44:30.596652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 12:44:30.643354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.643664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.668341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:44:30.668936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 12:44:30.756255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:44:30.756684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 12:44:30.842695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:44:30.842746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:44:30.878228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:44:30.878284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 12:44:30.878602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:44:30.878712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 12:44:30.990384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:44:30.990868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 12:44:32.117111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 13:08:03.394394       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d074bfb341af] <==
	I0429 13:10:41.693134       1 serving.go:380] Generated self-signed cert in-memory
	W0429 13:10:44.634173       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 13:10:44.634247       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 13:10:44.634261       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 13:10:44.634270       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 13:10:44.717932       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 13:10:44.718149       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 13:10:44.723740       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 13:10:44.724626       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 13:10:44.724858       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 13:10:44.723983       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 13:10:44.826346       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 13:10:51 multinode-409200 kubelet[1533]: E0429 13:10:51.781422    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-ctb8n" podUID="1141a626-d4ac-4826-a912-7b7ed378b013"
	Apr 29 13:10:51 multinode-409200 kubelet[1533]: E0429 13:10:51.781538    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gr44t" podUID="0702453a-eae6-44a3-893d-10d040074461"
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.236634    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.236853    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1141a626-d4ac-4826-a912-7b7ed378b013-config-volume podName:1141a626-d4ac-4826-a912-7b7ed378b013 nodeName:}" failed. No retries permitted until 2024-04-29 13:11:01.236830686 +0000 UTC m=+22.822464808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1141a626-d4ac-4826-a912-7b7ed378b013-config-volume") pod "coredns-7db6d8ff4d-ctb8n" (UID: "1141a626-d4ac-4826-a912-7b7ed378b013") : object "kube-system"/"coredns" not registered
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.337523    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.337642    1533 projected.go:200] Error preparing data for projected volume kube-api-access-p48cj for pod default/busybox-fc5497c4f-gr44t: object "default"/"kube-root-ca.crt" not registered
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.337713    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0702453a-eae6-44a3-893d-10d040074461-kube-api-access-p48cj podName:0702453a-eae6-44a3-893d-10d040074461 nodeName:}" failed. No retries permitted until 2024-04-29 13:11:01.337694492 +0000 UTC m=+22.923328514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p48cj" (UniqueName: "kubernetes.io/projected/0702453a-eae6-44a3-893d-10d040074461-kube-api-access-p48cj") pod "busybox-fc5497c4f-gr44t" (UID: "0702453a-eae6-44a3-893d-10d040074461") : object "default"/"kube-root-ca.crt" not registered
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.779606    1533 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.780396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gr44t" podUID="0702453a-eae6-44a3-893d-10d040074461"
	Apr 29 13:10:53 multinode-409200 kubelet[1533]: E0429 13:10:53.781222    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-ctb8n" podUID="1141a626-d4ac-4826-a912-7b7ed378b013"
	Apr 29 13:10:55 multinode-409200 kubelet[1533]: E0429 13:10:55.781537    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-ctb8n" podUID="1141a626-d4ac-4826-a912-7b7ed378b013"
	Apr 29 13:10:55 multinode-409200 kubelet[1533]: E0429 13:10:55.782360    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gr44t" podUID="0702453a-eae6-44a3-893d-10d040074461"
	Apr 29 13:10:57 multinode-409200 kubelet[1533]: E0429 13:10:57.781165    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gr44t" podUID="0702453a-eae6-44a3-893d-10d040074461"
	Apr 29 13:10:57 multinode-409200 kubelet[1533]: E0429 13:10:57.782404    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-ctb8n" podUID="1141a626-d4ac-4826-a912-7b7ed378b013"
	Apr 29 13:11:18 multinode-409200 kubelet[1533]: I0429 13:11:18.127365    1533 scope.go:117] "RemoveContainer" containerID="5a03c0724371bcb9dad13d07dbf8a8f1e06591ac4d43508c632ae02a7f0ce097"
	Apr 29 13:11:18 multinode-409200 kubelet[1533]: I0429 13:11:18.127874    1533 scope.go:117] "RemoveContainer" containerID="8f7ea198e63de82d0eb8176a982c3e474bc13c86394c82293b19827180ef25e8"
	Apr 29 13:11:18 multinode-409200 kubelet[1533]: E0429 13:11:18.128075    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9)\"" pod="kube-system/storage-provisioner" podUID="a200a31d-7fe5-4ebd-b4ea-f8ae593de3f9"
	Apr 29 13:11:29 multinode-409200 kubelet[1533]: I0429 13:11:29.781190    1533 scope.go:117] "RemoveContainer" containerID="8f7ea198e63de82d0eb8176a982c3e474bc13c86394c82293b19827180ef25e8"
	Apr 29 13:11:38 multinode-409200 kubelet[1533]: I0429 13:11:38.741616    1533 scope.go:117] "RemoveContainer" containerID="390664a859132d447f655ea904c9523c607371abda774b4a706361587d5e720d"
	Apr 29 13:11:38 multinode-409200 kubelet[1533]: I0429 13:11:38.794918    1533 scope.go:117] "RemoveContainer" containerID="030b6d42f50f921a6679c30a24193b23e0d34850072d6699df102198ec4978cb"
	Apr 29 13:11:38 multinode-409200 kubelet[1533]: E0429 13:11:38.814925    1533 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 13:11:38 multinode-409200 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 13:11:38 multinode-409200 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 13:11:38 multinode-409200 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 13:11:38 multinode-409200 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0429 13:11:37.217780    6404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-409200 -n multinode-409200: (12.2200437s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-409200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (366.19s)

TestRunningBinaryUpgrade (10800.526s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.154916441.exe start -p running-upgrade-899400 --memory=2200 --vm-driver=hyperv
E0429 13:31:27.493904    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:32:24.795100    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.154916441.exe start -p running-upgrade-899400 --memory=2200 --vm-driver=hyperv: (8m19.4786344s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-899400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0429 13:37:24.793900    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 13:37:50.778424    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:38:48.031622    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestKubernetesUpgrade (10m46s)
	TestPause (5m54s)
	TestPause/serial (5m54s)
	TestPause/serial/Start (5m54s)
	TestRunningBinaryUpgrade (10m46s)
	TestStartStop (5m54s)
	TestStoppedBinaryUpgrade (5m39s)
	TestStoppedBinaryUpgrade/Upgrade (5m38s)

goroutine 2177 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00002e9c0, 0xc0012a3bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000664420, {0x490d540, 0x2a, 0x2a}, {0x25d8526?, 0x41806f?, 0x4930760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0001197c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0001197c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000114f00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 69 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 27
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 631 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00002e820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00002e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc00002e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc00002e820, 0x2feafb8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 633 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00002f520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00002f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc00002f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc00002f520, 0x2feafc8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 635 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00002f860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00002f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc00002f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc00002f860, 0x2feaff0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 843 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3565da0, 0xc000224300}, 0xc002aa5f50, 0xc002aa5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3565da0, 0xc000224300}, 0x90?, 0xc002aa5f50, 0xc002aa5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3565da0?, 0xc000224300?}, 0xc002aa5fb0?, 0x8f6448?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x8f63fb?, 0xc0028bc000?, 0xc002832c00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 898
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 897 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021c8cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 835
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 150 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0004a4a90, 0x3d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2074be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006770e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004a4ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021676b0, {0x35423a0, 0xc000674180}, 0x1, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021676b0, 0x3b9aca00, 0x0, 0x1, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 128
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 151 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3565da0, 0xc000224300}, 0xc0022e5f50, 0xc0022e5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3565da0, 0xc000224300}, 0xa0?, 0xc0022e5f50, 0xc0022e5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3565da0?, 0xc000224300?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0022e5fd0?, 0x4ee404?, 0xc0022c22a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 128
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2170 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0007c46e0, 0xc002aaa600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2167
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 844 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 843
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 127 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000677200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 135
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 128 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004a4ac0, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 135
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2095 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0022e1b20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0022e1b4d?, 0xc0022e1b80?, 0x36fdd6?, 0x49bdbc0?, 0xc0022e1c08?, 0x362985?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x468, {0xc00290423e?, 0x5c2, 0x41417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002194508?, {0xc00290423e?, 0x398?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002194508, {0xc00290423e, 0x5c2, 0x5c2})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0004f8180, {0xc00290423e?, 0xc0007c0230?, 0x23d?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00249ee40, {0x3540f60, 0xc0004f8258})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc00249ee40}, {0x3540f60, 0xc0004f8258}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x3d6a33?, {0x35410a0, 0xc00249ee40})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc00249ee40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc00249ee40}, {0x3541020, 0xc0004f8180}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000a20790?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2094
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2094 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3a434de0?, {0xc0020856a8?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x34c, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0020ec510)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a1a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a1a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00002e4e0, 0xc000a1a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc002085c20?, {0x354e798, 0xc0000aa5a0}, 0x2fec288, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x354e798?, 0xc0000aa5a0?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0022a5e28, 0x3b9aca00, 0x1a3185c5000, {0xc0022a5d08?, 0x2074be0?, 0x3af288?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc00002e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc00002e4e0, 0xc00079c600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2146
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 632 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00002f380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00002f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc00002f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc00002f380, 0x2feafb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 898 [chan receive, 128 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002000000, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 835
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2044 [chan receive, 6 minutes]:
testing.(*T).Run(0xc002742b60, {0x257def5?, 0xd18c2e2800?}, 0xc0022ee840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc002742b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc002742b60, 0x2feb0b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2211 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x2397da7ae68?, {0xc0009b3b20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2397da7ae68?, 0xc0009b3b80?, 0x36fdd6?, 0x49bdbc0?, 0xc0009b3c08?, 0x362985?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x68c, {0xc0012b420f?, 0x1df1, 0x41417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022c5688?, {0xc0012b420f?, 0x39c1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022c5688, {0xc0012b420f, 0x1df1, 0x1df1})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0007cfe58, {0xc0012b420f?, 0xc0009b3d98?, 0x1e37?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00249e120, {0x3540f60, 0xc0004f8058})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc00249e120}, {0x3540f60, 0xc0004f8058}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x35410a0, 0xc00249e120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc00249e120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc00249e120}, {0x3541020, 0xc0007cfe58}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0031dc960?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2113
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2166 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0012fed00, {0x257c9f6?, 0x63?}, 0xc002584240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0012fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0012fed00, 0xc0022ee840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2044
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1283 [chan send, 124 minutes]:
os/exec.(*Cmd).watchCtx(0xc0031a26e0, 0xc0031dcb40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1282
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 842 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00075a650, 0x30)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2074be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021c8ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002000000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021502e0, {0x35423a0, 0xc001fe81e0}, 0x1, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021502e0, 0x3b9aca00, 0x0, 0x1, 0xc000224300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 898
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 634 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00002f6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00002f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc00002f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc00002f6c0, 0x2feaff8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2161 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002743040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002743040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002743040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002743040, 0xc0025840c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2165 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0012feb60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0012feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0012feb60, 0xc002584200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 715 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x2397d77daa0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000510c08?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0022f0ca0, 0xc0022dfbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0022f0c88, 0x38c, {0xc0008250e0?, 0x0?, 0x0?}, 0xc000510808?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0022f0c88, 0xc0022dfd90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0022f0c88)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc002b0c460)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc002b0c460)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008a80f0, {0x3558e40, 0xc002b0c460})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008a80f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0012fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 712
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2196 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0xc002021000?, {0xc002899b20?, 0xc002899be8?, 0x67e71e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0x0?, 0x16?, 0xc000665170?, 0xc002899c08?, 0x36281b?, 0x10?, 0x10?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6d8, {0xc00096e800?, 0x200, 0xc00096e800?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002194a08?, {0xc00096e800?, 0x36281b?, 0x200?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002194a08, {0xc00096e800, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0004f81f8, {0xc00096e800?, 0xc000665170?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00249ee70, {0x3540f60, 0xc0007cfeb8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc00249ee70}, {0x3540f60, 0xc0007cfeb8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x10?, {0x35410a0, 0xc00249ee70})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc00249ee70?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc00249ee70}, {0x3541020, 0xc0004f81f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2feb0a8?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2094
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2169 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc00289db20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4ec457?, 0xc00289db80?, 0x36fdd6?, 0x49bdbc0?, 0xc00289dc08?, 0x362985?, 0x23938160eb8?, 0xc00011e641?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a4, {0xc00095b53a?, 0x2c6, 0xc00095b400?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022c4c88?, {0xc00095b53a?, 0x395170?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022c4c88, {0xc00095b53a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000110df0, {0xc00095b53a?, 0xc00289dd98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022eea20, {0x3540f60, 0xc0007cfe50})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc0022eea20}, {0x3540f60, 0xc0007cfe50}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x35410a0, 0xc0022eea20})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc0022eea20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc0022eea20}, {0x3541020, 0xc000110df0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2feb070?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2167
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2160 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002742ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002742ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002742ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002742ea0, 0xc002584080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2197 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a1a000, 0xc0031dd8c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2094
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2210 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x418140?, {0xc0027a5b20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0027a5b4d?, 0xc0027a5b80?, 0x36fdd6?, 0x49bdbc0?, 0xc0027a5c08?, 0x362985?, 0x23938160eb8?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x67c, {0xc00200626f?, 0x591, 0x41417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022c4a08?, {0xc00200626f?, 0x0?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022c4a08, {0xc00200626f, 0x591, 0x591})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0007cfe38, {0xc00200626f?, 0x2397d7760e8?, 0x20c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00249e0f0, {0x3540f60, 0xc000670030})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc00249e0f0}, {0x3540f60, 0xc000670030}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x35410a0, 0xc00249e0f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc00249e0f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc00249e0f0}, {0x3541020, 0xc0007cfe38}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a620c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2113
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2042 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002742680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002742680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002742680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc002742680, 0x2feb098)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2168 [syscall, locked to thread]:
syscall.SyscallN(0xc000a0e958?, {0xc002129b20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x35?, 0xc002129b80?, 0x36fdd6?, 0x49bdbc0?, 0xc002129c08?, 0x362985?, 0x23938160108?, 0xc002fcff4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x69c, {0xc002006b28?, 0x4d8, 0x41417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022c4788?, {0xc002006b28?, 0x0?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022c4788, {0xc002006b28, 0x4d8, 0x4d8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000110db8, {0xc002006b28?, 0x2397d7760e8?, 0x227?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022ee990, {0x3540f60, 0xc000670028})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc0022ee990}, {0x3540f60, 0xc000670028}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x35410a0, 0xc0022ee990})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc0022ee990?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc0022ee990}, {0x3541020, 0xc000110db8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0022c2300?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2167
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1003 [chan send, 122 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a1a000, 0xc0025d20c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 859
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2147 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3a434de0?, {0xc002213798?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5f0, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0020ecc00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0031a2000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0031a2000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002743860, 0xc0031a2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002743860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaff
testing.tRunner(0xc002743860, 0x2feb060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2164 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002743d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002743d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002743d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002743d40, 0xc002584180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2140 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0012c9b20?, 0x46af25?, 0x49?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0023bdb08?, 0x45?, 0x120?, 0xa?, 0xc0012c9c08?, 0x36281b?, 0x358ba6?, 0xc002fcff80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6dc, {0xc002007a0e?, 0x5f2, 0xc002007800?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00214c508?, {0xc002007a0e?, 0x39c1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00214c508, {0xc002007a0e, 0x5f2, 0x5f2})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0004f82b8, {0xc002007a0e?, 0xc0012c9d98?, 0x20e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002b220f0, {0x3540f60, 0xc0004f83b8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc002b220f0}, {0x3540f60, 0xc0004f83b8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x35410a0, 0xc002b220f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc002b220f0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc002b220f0}, {0x3541020, 0xc0004f82b8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002aaa240?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2147
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2167 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3a434de0?, {0xc0012e9a78?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6bc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0027ca630)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0007c46e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0007c46e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0012ff040, 0xc0007c46e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFreshStart({0x3565be0, 0xc0003b2770}, 0xc0012ff040, {0xc00265c150, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:80 +0x275
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0012ff040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0012ff040, 0xc002584240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2166
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2146 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0027436c0, {0x2580991?, 0x3005753e800?}, 0xc00079c600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0027436c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc0027436c0, 0x2feb0e8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2111 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0027431e0, {0x257c9f1?, 0x4a7333?}, 0x2feb2b8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0027431e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0027431e0, 0x2feb0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2163 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002743a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002743a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002743a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002743a00, 0xc002584140)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2097 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0031a2000, 0xc002a62060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2147
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2113 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3a434de0?, {0xc00220f960?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6e8, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0027ca6f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000644dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000644dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002743520, 0xc000644dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002743520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc002743520, 0x2feb0c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2096 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002807b20?, 0x377ea5?, 0x49bdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc003488e41?, 0xc002807b80?, 0x36fdd6?, 0x49bdbc0?, 0xc002807c08?, 0x362985?, 0x23938160598?, 0x77?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6e4, {0xc0025e41ac?, 0x1e54, 0x41417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00214ca08?, {0xc0025e41ac?, 0x39c1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00214ca08, {0xc0025e41ac, 0x1e54, 0x1e54})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0004f8340, {0xc0025e41ac?, 0xc003151180?, 0x1ea2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002b22120, {0x3540f60, 0xc000110d48})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35410a0, 0xc002b22120}, {0x3540f60, 0xc000110d48}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002807e78?, {0x35410a0, 0xc002b22120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x48c1840?, {0x35410a0?, 0xc002b22120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x35410a0, 0xc002b22120}, {0x3541020, 0xc0004f8340}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002aaa0c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2147
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2162 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc000672af0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002743380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002743380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002743380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002743380, 0xc002584100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2159
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2159 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002742d00, 0x2feb2b8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2111
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2212 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000644dc0, 0xc002aaa0c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2113
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3
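
Note on the stacks above: they all share one shape. A test goroutine enters k8s.io/minikube/test/integration.Run, which calls os/exec.(*Cmd).Run and blocks in Process.Wait (syscall.WaitForSingleObject on Windows) until the spawned minikube process exits, while a companion watchCtx goroutine selects on the command's context. A minimal sketch of that pattern, with a hypothetical command and timeout rather than the harness code itself:

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Bound the child's lifetime; exec.CommandContext spawns the watchCtx
	// goroutine seen in the dump, which kills the process when ctx expires.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Hypothetical invocation standing in for the blocked test command.
	cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
		"start", "-p", "NoKubernetes-899400", "--driver=hyperv")

	// Run = Start + Wait; Wait parks in Process.Wait (WaitForSingleObject
	// on Windows), which is exactly where the test goroutines sit above.
	if err := cmd.Run(); err != nil {
		log.Printf("command failed: %v", err)
	}
}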

TestNoKubernetes/serial/StartWithK8s (302.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --driver=hyperv: exit status 1 (4m59.6406278s)

-- stdout --
	* [NoKubernetes-899400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-899400" primary control-plane node in "NoKubernetes-899400" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0429 13:28:27.024396   10896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-899400 -n NoKubernetes-899400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-899400 -n NoKubernetes-899400: exit status 7 (3.2752589s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0429 13:33:26.637131    3452 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0429 13:33:29.755449    3452 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-899400".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-899400 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-899400:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-899400" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.92s)
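
Note on the recurring stderr warning: the long hex directory in the meta.json path is not random. The Docker CLI stores context metadata under the SHA-256 of the context name, and hashing "default" reproduces the digest seen throughout this report. A quick illustrative check, not part of the test suite:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Docker keeps CLI contexts at .docker\contexts\meta\<sha256(name)>\meta.json;
	// for the "default" context this prints the hash from the warnings above.
	sum := sha256.Sum256([]byte("default"))
	fmt.Printf("%x\n", sum)
	// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
}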


Test pass (140/190)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.14
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.53
9 TestDownloadOnly/v1.20.0/DeleteAll 1.43
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.34
12 TestDownloadOnly/v1.30.0/json-events 10.94
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.3
18 TestDownloadOnly/v1.30.0/DeleteAll 1.24
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.2
21 TestBinaryMirror 7.21
22 TestOffline 291.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.32
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
27 TestAddons/Setup 392.1
30 TestAddons/parallel/Ingress 65.04
31 TestAddons/parallel/InspektorGadget 25.72
32 TestAddons/parallel/MetricsServer 21.78
33 TestAddons/parallel/HelmTiller 33.48
35 TestAddons/parallel/CSI 108.93
36 TestAddons/parallel/Headlamp 33.19
37 TestAddons/parallel/CloudSpanner 21.23
38 TestAddons/parallel/LocalPath 87.13
39 TestAddons/parallel/NvidiaDevicePlugin 20.59
40 TestAddons/parallel/Yakd 5.02
43 TestAddons/serial/GCPAuth/Namespaces 0.35
44 TestAddons/StoppedEnableDisable 54.47
56 TestErrorSpam/start 17.45
57 TestErrorSpam/status 36.19
58 TestErrorSpam/pause 22.96
59 TestErrorSpam/unpause 23.2
60 TestErrorSpam/stop 55.41
63 TestFunctional/serial/CopySyncFile 0.04
64 TestFunctional/serial/StartWithProxy 238.81
65 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/KubeContext 0.14
71 TestFunctional/serial/CacheCmd/cache/add_remote 348.94
72 TestFunctional/serial/CacheCmd/cache/add_local 60.81
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
74 TestFunctional/serial/CacheCmd/cache/list 0.28
77 TestFunctional/serial/CacheCmd/cache/delete 0.55
80 TestFunctional/serial/ExtraConfig 145.42
81 TestFunctional/serial/ComponentHealth 0.26
82 TestFunctional/serial/LogsCmd 8.56
83 TestFunctional/serial/LogsFileCmd 10.74
84 TestFunctional/serial/InvalidService 20.62
90 TestFunctional/parallel/StatusCmd 41.4
94 TestFunctional/parallel/ServiceCmdConnect 27.77
95 TestFunctional/parallel/AddonsCmd 0.78
96 TestFunctional/parallel/PersistentVolumeClaim 45.59
98 TestFunctional/parallel/SSHCmd 22.04
99 TestFunctional/parallel/CpCmd 59.68
100 TestFunctional/parallel/MySQL 70.91
101 TestFunctional/parallel/FileSync 10.41
102 TestFunctional/parallel/CertSync 60.9
106 TestFunctional/parallel/NodeLabels 0.2
108 TestFunctional/parallel/NonActiveRuntimeDisabled 9.81
110 TestFunctional/parallel/License 3.13
111 TestFunctional/parallel/ServiceCmd/DeployApp 19.47
112 TestFunctional/parallel/ProfileCmd/profile_not_create 11.67
113 TestFunctional/parallel/ProfileCmd/profile_list 10.93
114 TestFunctional/parallel/ServiceCmd/List 14.12
115 TestFunctional/parallel/ProfileCmd/profile_json_output 11.57
116 TestFunctional/parallel/ServiceCmd/JSONOutput 14.64
117 TestFunctional/parallel/DockerEnv/powershell 44.82
121 TestFunctional/parallel/UpdateContextCmd/no_changes 2.62
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.61
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.65
124 TestFunctional/parallel/Version/short 0.3
125 TestFunctional/parallel/Version/components 8.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 8.3
127 TestFunctional/parallel/ImageCommands/ImageListTable 7.87
128 TestFunctional/parallel/ImageCommands/ImageListJson 7.97
129 TestFunctional/parallel/ImageCommands/ImageListYaml 8.17
130 TestFunctional/parallel/ImageCommands/ImageBuild 28.09
131 TestFunctional/parallel/ImageCommands/Setup 4.03
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.16
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.65
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.44
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 27.6
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.71
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.99
147 TestFunctional/parallel/ImageCommands/ImageRemove 14.79
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.92
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.26
150 TestFunctional/delete_addon-resizer_images 0.02
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 717.06
157 TestMultiControlPlane/serial/DeployApp 12.4
159 TestMultiControlPlane/serial/AddWorkerNode 255.29
160 TestMultiControlPlane/serial/NodeLabels 0.22
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 28.74
162 TestMultiControlPlane/serial/CopyFile 637.33
163 TestMultiControlPlane/serial/StopSecondaryNode 75.71
167 TestImageBuild/serial/Setup 203.91
168 TestImageBuild/serial/NormalBuild 9.69
169 TestImageBuild/serial/BuildWithBuildArg 9.1
170 TestImageBuild/serial/BuildWithDockerIgnore 7.68
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.6
175 TestJSONOutput/start/Command 244.22
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.94
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.9
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 39.53
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.59
203 TestMainNoArgs 0.28
204 TestMinikubeProfile 528.09
207 TestMountStart/serial/StartWithMountFirst 156.36
208 TestMountStart/serial/VerifyMountFirst 9.62
209 TestMountStart/serial/StartWithMountSecond 157.26
210 TestMountStart/serial/VerifyMountSecond 9.6
211 TestMountStart/serial/DeleteFirst 27.84
212 TestMountStart/serial/VerifyMountPostDelete 9.6
213 TestMountStart/serial/Stop 26.6
217 TestMultiNode/serial/FreshStart2Nodes 434.78
218 TestMultiNode/serial/DeployApp2Nodes 8.79
220 TestMultiNode/serial/AddNode 229.23
221 TestMultiNode/serial/MultiNodeLabels 0.19
222 TestMultiNode/serial/ProfileList 9.7
223 TestMultiNode/serial/CopyFile 365.06
224 TestMultiNode/serial/StopNode 77.73
230 TestPreload 532.32
231 TestScheduledStopWindows 335.45
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
TestDownloadOnly/v1.20.0/json-events (16.14s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-805300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-805300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.1429741s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.14s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.53s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-805300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-805300: exit status 85 (529.394ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |          |
	|         | -p download-only-805300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 10:39:12
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 10:39:12.204917    5988 out.go:291] Setting OutFile to fd 632 ...
	I0429 10:39:12.206085    5988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 10:39:12.206195    5988 out.go:304] Setting ErrFile to fd 636...
	I0429 10:39:12.206195    5988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 10:39:12.219915    5988 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0429 10:39:12.232005    5988 out.go:298] Setting JSON to true
	I0429 10:39:12.235403    5988 start.go:129] hostinfo: {"hostname":"minikube6","uptime":28624,"bootTime":1714358527,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 10:39:12.235403    5988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 10:39:12.243685    5988 out.go:97] [download-only-805300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 10:39:12.243685    5988 notify.go:220] Checking for updates...
	I0429 10:39:12.245987    5988 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0429 10:39:12.243685    5988 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0429 10:39:12.250838    5988 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 10:39:12.253398    5988 out.go:169] MINIKUBE_LOCATION=18756
	I0429 10:39:12.256037    5988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0429 10:39:12.260193    5988 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 10:39:12.262163    5988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 10:39:17.580318    5988 out.go:97] Using the hyperv driver based on user configuration
	I0429 10:39:17.580438    5988 start.go:297] selected driver: hyperv
	I0429 10:39:17.580438    5988 start.go:901] validating driver "hyperv" against <nil>
	I0429 10:39:17.580848    5988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 10:39:17.629519    5988 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0429 10:39:17.631531    5988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 10:39:17.631531    5988 cni.go:84] Creating CNI manager for ""
	I0429 10:39:17.631531    5988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 10:39:17.631531    5988 start.go:340] cluster config:
	{Name:download-only-805300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-805300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 10:39:17.632270    5988 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 10:39:17.636674    5988 out.go:97] Downloading VM boot image ...
	I0429 10:39:17.637663    5988 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 10:39:21.255650    5988 out.go:97] Starting "download-only-805300" primary control-plane node in "download-only-805300" cluster
	I0429 10:39:21.256304    5988 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 10:39:21.305053    5988 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 10:39:21.305557    5988 cache.go:56] Caching tarball of preloaded images
	I0429 10:39:21.305731    5988 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 10:39:21.311256    5988 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 10:39:21.311256    5988 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:21.382253    5988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 10:39:24.962150    5988 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:24.963718    5988 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:25.998588    5988 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 10:39:25.999833    5988 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-805300\config.json ...
	I0429 10:39:26.000522    5988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-805300\config.json: {Name:mkda821f79d70ea02e38361a70b865fa39795dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:39:26.001731    5988 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 10:39:26.002545    5988 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-805300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-805300"

-- /stdout --
** stderr ** 
	W0429 10:39:28.352811    2312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.53s)
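
Note on the download lines above: each artifact is fetched with a checksum in the URL query (md5 for the preload tarball, a .sha256 file for kubectl.exe) and then saved and verified locally before use. A self-contained sketch of that verification step, using the md5 digest from the log and a hypothetical local path:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 hashes a downloaded file and compares it to the expected digest,
// mirroring preload.go's "getting/saving/verifying checksum" steps above.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest taken from the preload download URL in the log; the file path
	// is hypothetical.
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload checksum OK")
}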

TestDownloadOnly/v1.20.0/DeleteAll (1.43s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4276426s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.43s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-805300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-805300: (1.3419326s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)

TestDownloadOnly/v1.30.0/json-events (10.94s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-614800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-614800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (10.9420996s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.94s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-614800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-614800: exit status 85 (299.7415ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | -p download-only-805300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| delete  | -p download-only-805300        | download-only-805300 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC | 29 Apr 24 10:39 UTC |
	| start   | -o=json --download-only        | download-only-614800 | minikube6\jenkins | v1.33.0 | 29 Apr 24 10:39 UTC |                     |
	|         | -p download-only-614800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 10:39:31
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 10:39:31.729985    4932 out.go:291] Setting OutFile to fd 780 ...
	I0429 10:39:31.730616    4932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 10:39:31.730616    4932 out.go:304] Setting ErrFile to fd 784...
	I0429 10:39:31.730616    4932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 10:39:31.754425    4932 out.go:298] Setting JSON to true
	I0429 10:39:31.756808    4932 start.go:129] hostinfo: {"hostname":"minikube6","uptime":28644,"bootTime":1714358527,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 10:39:31.756808    4932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 10:39:31.762722    4932 out.go:97] [download-only-614800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 10:39:31.763056    4932 notify.go:220] Checking for updates...
	I0429 10:39:31.765160    4932 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 10:39:31.768276    4932 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 10:39:31.773403    4932 out.go:169] MINIKUBE_LOCATION=18756
	I0429 10:39:31.775893    4932 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0429 10:39:31.780378    4932 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 10:39:31.780378    4932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 10:39:37.184934    4932 out.go:97] Using the hyperv driver based on user configuration
	I0429 10:39:37.184934    4932 start.go:297] selected driver: hyperv
	I0429 10:39:37.184934    4932 start.go:901] validating driver "hyperv" against <nil>
	I0429 10:39:37.184934    4932 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 10:39:37.234211    4932 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0429 10:39:37.235772    4932 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 10:39:37.235772    4932 cni.go:84] Creating CNI manager for ""
	I0429 10:39:37.235772    4932 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 10:39:37.235772    4932 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 10:39:37.235772    4932 start.go:340] cluster config:
	{Name:download-only-614800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-614800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 10:39:37.235772    4932 iso.go:125] acquiring lock: {Name:mk3084483c03f30539a482c8227910653d175657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 10:39:37.239496    4932 out.go:97] Starting "download-only-614800" primary control-plane node in "download-only-614800" cluster
	I0429 10:39:37.239496    4932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 10:39:37.285814    4932 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 10:39:37.285814    4932 cache.go:56] Caching tarball of preloaded images
	I0429 10:39:37.286602    4932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 10:39:37.288914    4932 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 10:39:37.288914    4932 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:37.357750    4932 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 10:39:40.492659    4932 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:40.493662    4932 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 10:39:41.454583    4932 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 10:39:41.456069    4932 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-614800\config.json ...
	I0429 10:39:41.456926    4932 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-614800\config.json: {Name:mk306162f4cba1ced91b9bd82fd3cd6a4e213b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 10:39:41.457217    4932 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 10:39:41.458346    4932 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-614800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-614800"

-- /stdout --
** stderr ** 
	W0429 10:39:42.591442    2168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.30s)

TestDownloadOnly/v1.30.0/DeleteAll (1.24s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2409971s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.24s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.2s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-614800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-614800: (1.2019522s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.20s)

TestBinaryMirror (7.21s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-922900 --alsologtostderr --binary-mirror http://127.0.0.1:56167 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-922900 --alsologtostderr --binary-mirror http://127.0.0.1:56167 --driver=hyperv: (6.3045889s)
helpers_test.go:175: Cleaning up "binary-mirror-922900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-922900
--- PASS: TestBinaryMirror (7.21s)
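
Note: TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:56167, i.e. a local HTTP server standing in for the public release bucket. A stand-in mirror can be as small as a file server; the ./mirror directory below is a hypothetical layout of cached binaries:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve cached Kubernetes binaries so a client started with
	// --binary-mirror http://127.0.0.1:56167 downloads from here
	// instead of dl.k8s.io.
	log.Fatal(http.ListenAndServe("127.0.0.1:56167",
		http.FileServer(http.Dir("./mirror"))))
}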

TestOffline (291.36s)

=== RUN   TestOffline
=== PAUSE TestOffline


=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-899400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-899400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m9.3920954s)
helpers_test.go:175: Cleaning up "offline-docker-899400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-899400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-899400: (41.961833s)
--- PASS: TestOffline (291.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-839400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-839400: exit status 85 (316.0563ms)

-- stdout --
	* Profile "addons-839400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-839400"

-- /stdout --
** stderr ** 
	W0429 10:39:55.124534    8892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-839400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-839400: exit status 85 (303.2908ms)

-- stdout --
	* Profile "addons-839400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-839400"

-- /stdout --
** stderr ** 
	W0429 10:39:55.121131    6236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

TestAddons/Setup (392.1s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-839400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-839400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m32.096808s)
--- PASS: TestAddons/Setup (392.10s)

TestAddons/parallel/Ingress (65.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-839400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-839400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-839400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4dab4a87-ec2c-4f9e-a135-2b821a6d30db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4dab4a87-ec2c-4f9e-a135-2b821a6d30db] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0146095s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.2706638s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-839400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0429 10:48:20.502693    8992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-839400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 ip: (2.4642738s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.26.182.147
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable ingress-dns --alsologtostderr -v=1: (15.6145699s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable ingress --alsologtostderr -v=1: (21.6183071s)
--- PASS: TestAddons/parallel/Ingress (65.04s)
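
Note: the ingress check above requests http://127.0.0.1/ while forcing Host: nginx.example.com, so the controller routes on the header rather than the URL. The equivalent request in Go, for reference (the test itself shells out to curl over minikube ssh):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// net/http takes the Host header from Request.Host, not req.Header;
	// this is what lets the ingress route to the nginx test service.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}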

                                                
                                    
TestAddons/parallel/InspektorGadget (25.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rvgjq" [895a74bb-fc17-4db4-aabe-9953a75526b3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0132813s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-839400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-839400: (20.6991416s)
--- PASS: TestAddons/parallel/InspektorGadget (25.72s)

TestAddons/parallel/MetricsServer (21.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 19.6154ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-h8b6k" [268e1989-9866-4928-adab-9f2ff85ea084] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0381347s
addons_test.go:415: (dbg) Run:  kubectl --context addons-839400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable metrics-server --alsologtostderr -v=1: (16.5304235s)
--- PASS: TestAddons/parallel/MetricsServer (21.78s)

TestAddons/parallel/HelmTiller (33.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.677ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-pgj25" [b7c7db97-d07b-461f-a8c3-475f2a651364] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0682501s
addons_test.go:473: (dbg) Run:  kubectl --context addons-839400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-839400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.9069826s)
addons_test.go:478: kubectl --context addons-839400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:473: (dbg) Run:  kubectl --context addons-839400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-839400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.5090796s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable helm-tiller --alsologtostderr -v=1: (15.3070487s)
--- PASS: TestAddons/parallel/HelmTiller (33.48s)
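
The "Unable to use a TTY" stderr above is expected when `kubectl run -it` is launched from a non-interactive test process: `-t` requests a TTY the harness cannot provide, so kubectl falls back to streaming logs. A hedged sketch of guarding against that (golang.org/x/term is an assumption here, not something this suite is known to use):

-- example (Go) --
// Sketch: only pass -t to kubectl when stdin really is a terminal.
package main

import (
	"os"
	"os/exec"

	"golang.org/x/term"
)

func main() {
	args := []string{"--context", "addons-839400", "run", "--rm", "helm-test",
		"--restart=Never", "--image=docker.io/alpine/helm:2.16.3",
		"--namespace=kube-system", "-i"}
	if term.IsTerminal(int(os.Stdin.Fd())) {
		args = append(args, "-t") // request a TTY only when one exists
	}
	args = append(args, "--", "version")
	cmd := exec.Command("kubectl", args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	_ = cmd.Run()
}
-- /example --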

                                                
                                    
TestAddons/parallel/CSI (108.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.323ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-839400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-839400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5ad1be51-9f81-4aad-b5df-3e2a7bfe1426] Pending
helpers_test.go:344: "task-pv-pod" [5ad1be51-9f81-4aad-b5df-3e2a7bfe1426] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5ad1be51-9f81-4aad-b5df-3e2a7bfe1426] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0138238s
addons_test.go:584: (dbg) Run:  kubectl --context addons-839400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-839400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-839400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-839400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-839400 delete pod task-pv-pod: (1.7902453s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-839400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-839400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-839400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dab539aa-d8e2-47c1-9e64-f42baab1fa1e] Pending
helpers_test.go:344: "task-pv-pod-restore" [dab539aa-d8e2-47c1-9e64-f42baab1fa1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dab539aa-d8e2-47c1-9e64-f42baab1fa1e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0181152s
addons_test.go:626: (dbg) Run:  kubectl --context addons-839400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-839400 delete pod task-pv-pod-restore: (4.554666s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-839400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-839400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.7601589s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable volumesnapshots --alsologtostderr -v=1: (16.2655609s)
--- PASS: TestAddons/parallel/CSI (108.93s)
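
The long run of identical helpers_test.go:394 lines above is a poll loop: the PVC phase is re-read with a JSONPath query until it reports Bound. A minimal sketch of that pattern under the same 6m budget (an illustration, not the helper's real code; the 2s interval is a guess):

-- example (Go) --
// Sketch: poll `kubectl get pvc -o jsonpath=...` until the claim binds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the test
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "addons-839400",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second) // the real helper's interval may differ
	}
	fmt.Println("timed out waiting for pvc hpvc")
}
-- /example --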

                                                
                                    
TestAddons/parallel/Headlamp (33.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-839400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-839400 --alsologtostderr -v=1: (16.1758977s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-j5cpk" [03a76f3a-7726-4910-a2e3-3a5c1660d9be] Pending
helpers_test.go:344: "headlamp-7559bf459f-j5cpk" [03a76f3a-7726-4910-a2e3-3a5c1660d9be] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-j5cpk" [03a76f3a-7726-4910-a2e3-3a5c1660d9be] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-j5cpk" [03a76f3a-7726-4910-a2e3-3a5c1660d9be] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0074871s
--- PASS: TestAddons/parallel/Headlamp (33.19s)

TestAddons/parallel/CloudSpanner (21.23s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-rbkv5" [be26d83f-8541-45f4-b635-6f793ac7f331] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0154768s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-839400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-839400: (16.2027116s)
--- PASS: TestAddons/parallel/CloudSpanner (21.23s)

TestAddons/parallel/LocalPath (87.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-839400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-839400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [923a8681-bcb8-4c46-8201-65886f8b1f65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [923a8681-bcb8-4c46-8201-65886f8b1f65] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [923a8681-bcb8-4c46-8201-65886f8b1f65] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0103237s
addons_test.go:891: (dbg) Run:  kubectl --context addons-839400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 ssh "cat /opt/local-path-provisioner/pvc-728dcdb0-c080-4102-9c29-17ac82cdab32_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 ssh "cat /opt/local-path-provisioner/pvc-728dcdb0-c080-4102-9c29-17ac82cdab32_default_test-pvc/file1": (11.091979s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-839400 delete pod test-local-path
addons_test.go:912: (dbg) Done: kubectl --context addons-839400 delete pod test-local-path: (1.0935081s)
addons_test.go:916: (dbg) Run:  kubectl --context addons-839400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.437472s)
--- PASS: TestAddons/parallel/LocalPath (87.13s)

TestAddons/parallel/NvidiaDevicePlugin (20.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fp9v2" [ea0a003d-aac3-4f27-8254-896b5dab1905] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0199948s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-839400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-839400: (15.569149s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.59s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-vzlb4" [7a6eab47-fbf0-4411-8654-608332a23838] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0221253s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-839400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-839400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)

TestAddons/StoppedEnableDisable (54.47s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-839400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-839400: (41.8260347s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-839400
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-839400: (5.0244548s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-839400
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-839400: (4.8029657s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-839400
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-839400: (2.8182311s)
--- PASS: TestAddons/StoppedEnableDisable (54.47s)

TestErrorSpam/start (17.45s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run: (5.7408071s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run: (5.9048808s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 start --dry-run: (5.794847s)
--- PASS: TestErrorSpam/start (17.45s)
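
TestErrorSpam repeats the same subcommand and fails the run if minikube emits unexpected warnings or errors along the way. A rough sketch of that idea (the real matching rules in error_spam_test.go are not visible in this log, so the klog-style W/E prefix filter below is only illustrative):

-- example (Go) --
// Sketch: run `start --dry-run` three times, as the log above does, and
// surface any stderr lines that look like klog warnings or errors.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for i := 0; i < 3; i++ {
		var stderr bytes.Buffer
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "nospam-205500", "start", "--dry-run")
		cmd.Stderr = &stderr
		_ = cmd.Run()
		for _, line := range strings.Split(stderr.String(), "\n") {
			if strings.HasPrefix(line, "W") || strings.HasPrefix(line, "E") {
				fmt.Println("unexpected spam:", line)
			}
		}
	}
}
-- /example --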

                                                
                                    
TestErrorSpam/status (36.19s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status: (12.4405927s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status: (11.9291486s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 status: (11.8201293s)
--- PASS: TestErrorSpam/status (36.19s)

TestErrorSpam/pause (22.96s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause: (7.8625976s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause: (7.5198291s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 pause: (7.578612s)
--- PASS: TestErrorSpam/pause (22.96s)

TestErrorSpam/unpause (23.2s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause: (7.8609368s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause: (7.7437299s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 unpause: (7.5930856s)
--- PASS: TestErrorSpam/unpause (23.20s)

TestErrorSpam/stop (55.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop
E0429 10:56:27.420628    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop: (33.8033662s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop
E0429 10:56:55.261852    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop: (10.9054212s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-205500 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-205500 stop: (10.7006086s)
--- PASS: TestErrorSpam/stop (55.41s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8496\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (238.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-197400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0429 11:01:27.433958    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-197400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m58.8029242s)
--- PASS: TestFunctional/serial/StartWithProxy (238.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/CacheCmd/cache/add_remote (348.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:3.1: (1m47.9407198s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:3.3
E0429 11:11:27.435276    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:3.3: (2m0.5024436s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cache add registry.k8s.io/pause:latest: (2m0.4940027s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (348.94s)

TestFunctional/serial/CacheCmd/cache/add_local (60.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-197400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2602716985\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-197400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2602716985\001: (2.2777531s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache add minikube-local-cache-test:functional-197400
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cache add minikube-local-cache-test:functional-197400: (58.0406688s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cache delete minikube-local-cache-test:functional-197400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-197400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.81s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.55s)
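
Taken together, the CacheCmd tests above walk one lifecycle: add images to minikube's cache, list what is cached, then delete the entries. A condensed sketch of the same sequence driven from Go, with the binary path and image tag copied from the log:

-- example (Go) --
// Sketch: exercise cache add / list / delete through the test binary.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and
// echoes its combined output.
func run(args ...string) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}

func main() {
	run("-p", "functional-197400", "cache", "add", "registry.k8s.io/pause:3.1")
	run("cache", "list")
	run("cache", "delete", "registry.k8s.io/pause:3.1")
}
-- /example --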

                                                
                                    
TestFunctional/serial/ExtraConfig (145.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-197400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 11:31:27.450727    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-197400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m25.4165774s)
functional_test.go:757: restart took 2m25.4170738s for "functional-197400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (145.42s)

TestFunctional/serial/ComponentHealth (0.26s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-197400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.26s)
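
The phase/status pairs above come from parsing `kubectl get po ... -o=json`. A hedged sketch of that check, decoding only the fields it needs (field names follow the core/v1 Pod schema; the test's actual parsing code is not shown in this log):

-- example (Go) --
// Sketch: list control-plane pods as JSON and report each pod's phase
// and Ready condition, like the log lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just the fields this check needs.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-197400",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("bad json:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}
-- /example --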

                                                
                                    
TestFunctional/serial/LogsCmd (8.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs: (8.5587467s)
--- PASS: TestFunctional/serial/LogsCmd (8.56s)

TestFunctional/serial/LogsFileCmd (10.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3850877882\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3850877882\001\logs.txt: (10.7351323s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.74s)

TestFunctional/serial/InvalidService (20.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-197400 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-197400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-197400: exit status 115 (16.6255576s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.26.179.82:32509 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	W0429 11:32:07.198003    4832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-197400 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.62s)
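
The assertion here keys off the process exit code: in the run above, `minikube service` exited with status 115 alongside the SVC_UNREACHABLE message when the service had no running pod. Detecting that from Go looks like this (the 115 value is what this run observed, not a documented contract):

-- example (Go) --
// Sketch: surface the exit code of a failing `minikube service` call.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"service", "invalid-svc", "-p", "functional-197400")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 115 in the run above
	}
}
-- /example --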

                                                
                                    
TestFunctional/parallel/StatusCmd (41.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 status: (13.6057677s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.9907532s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 status -o json: (13.8008821s)
--- PASS: TestFunctional/parallel/StatusCmd (41.40s)
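
The -f argument in the second invocation is a Go text/template evaluated against minikube's status struct, which is what the {{.Host}}-style placeholders are (the logged format string spells "kublet", which simply becomes a literal label in the output). The same mechanism in miniature, with an illustrative Status type rather than minikube's real one:

-- example (Go) --
// Sketch: render a status line through text/template, as `status -f` does.
package main

import (
	"os"
	"text/template"
)

// Status is illustrative only; minikube's actual struct has more fields.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}
-- /example --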

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-197400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-197400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-lw2lv" [0504ed71-90d7-4f76-bf23-1b7652b400a4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-lw2lv" [0504ed71-90d7-4f76-bf23-1b7652b400a4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.019102s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 service hello-node-connect --url: (18.2799218s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.26.179.82:32573
functional_test.go:1671: http://172.26.179.82:32573: success! body:

Hostname: hello-node-connect-57b4589c47-lw2lv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.26.179.82:8080/

Request Headers:
	accept-encoding=gzip
	host=172.26.179.82:32573
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.77s)
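
After resolving the NodePort URL, the test issues a plain HTTP GET and records the echoserver body shown above. The equivalent check in Go (the URL is the one this run happened to get, and is only reachable from the test host):

-- example (Go) --
// Sketch: fetch the URL returned by `minikube service --url` and print
// the echoserver response body.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://172.26.179.82:32573")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n", body)
}
-- /example --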

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.78s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.78s)

TestFunctional/parallel/PersistentVolumeClaim (45.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ec380831-785d-467c-ad45-99d8dcbe6f2d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0213185s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-197400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-197400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-197400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-197400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8dd54942-eaca-481c-b303-1cd637941bcf] Pending
helpers_test.go:344: "sp-pod" [8dd54942-eaca-481c-b303-1cd637941bcf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8dd54942-eaca-481c-b303-1cd637941bcf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0148055s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-197400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-197400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-197400 delete -f testdata/storage-provisioner/pod.yaml: (1.3087582s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-197400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [84ffb4a7-1279-4de9-9f8f-d47c86b02be5] Pending
helpers_test.go:344: "sp-pod" [84ffb4a7-1279-4de9-9f8f-d47c86b02be5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [84ffb4a7-1279-4de9-9f8f-d47c86b02be5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.008547s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-197400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.59s)
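
The second apply/wait cycle above is the point of the test: a file written through the first sp-pod must still exist after the pod is deleted and recreated on the same PVC-backed volume. A compressed sketch of that sequence (manifest paths copied from the log; the wait between apply and exec is elided to a comment):

-- example (Go) --
// Sketch: write a marker file, recycle the pod, confirm the file survived.
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-197400 context and
// echoes its combined output.
func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-197400"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the recreated sp-pod is Running)
	_ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}
-- /example --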

                                                
                                    
TestFunctional/parallel/SSHCmd (22.04s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "echo hello": (10.1914039s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "cat /etc/hostname": (11.8443226s)
--- PASS: TestFunctional/parallel/SSHCmd (22.04s)

TestFunctional/parallel/CpCmd (59.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.9213511s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /home/docker/cp-test.txt": (10.141301s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cp functional-197400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd2556525290\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cp functional-197400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd2556525290\001\cp-test.txt: (10.9818551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /home/docker/cp-test.txt": (10.9428714s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.2921551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh -n functional-197400 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.3890872s)
--- PASS: TestFunctional/parallel/CpCmd (59.68s)

TestFunctional/parallel/MySQL (70.91s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-197400 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-h2kqt" [c34a28c2-abb7-4776-8d09-aa628c1090e9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-h2kqt" [c34a28c2-abb7-4776-8d09-aa628c1090e9] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.0116761s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (337.2098ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (352.2915ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (763.0375ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (352.6755ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (1.0347332s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;": exit status 1 (276.0241ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-197400 exec mysql-64454c8b5c-h2kqt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (70.91s)
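
Note: the six non-zero exits above are the expected warm-up sequence for the mysql:5.7 image, and the test passes because it keeps polling: ERROR 2002 means mysqld is not yet listening on its socket, and ERROR 1045 appears in the window where the server is up but its init scripts have not yet applied the configured root password. A sketch of the polling loop the test effectively performs (pod and context names taken from this run; not the suite's actual retry helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-64454c8b5c-h2kqt" // pod name from this run
	deadline := time.Now().Add(10 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-197400",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // ERROR 2002/1045 are transient during init
	}
}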

TestFunctional/parallel/FileSync (10.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/8496/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/test/nested/copy/8496/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/test/nested/copy/8496/hosts": (10.4082605s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.41s)
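
Note: FileSync checks that a file placed under the minikube home directory's `files/` tree before startup is synced into the VM at the same path; the `8496` path component is the test process PID, which keeps concurrent runs distinct. A sketch of the path convention, assuming the default `~/.minikube` home (this run points MINIKUBE_HOME at a Jenkins workspace instead):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	pid := os.Getpid() // 8496 in the run above
	// Anything under $MINIKUBE_HOME/files/<path> is synced into the VM at /<path>.
	local := filepath.Join(home, ".minikube", "files",
		"etc", "test", "nested", "copy", fmt.Sprint(pid), "hosts")
	remote := fmt.Sprintf("/etc/test/nested/copy/%d/hosts", pid)
	fmt.Println("place file at:", local)
	fmt.Println("expect it in the VM at:", remote)
}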

TestFunctional/parallel/CertSync (60.9s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/8496.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/8496.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/8496.pem": (10.3828753s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/8496.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /usr/share/ca-certificates/8496.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /usr/share/ca-certificates/8496.pem": (10.4531261s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.9702986s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/84962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/84962.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/84962.pem": (10.2828892s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/84962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /usr/share/ca-certificates/84962.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /usr/share/ca-certificates/84962.pem": (10.0092557s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.7955987s)
--- PASS: TestFunctional/parallel/CertSync (60.90s)
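
Note: each certificate is asserted in three places. The numeric names (`51391683.0`, `3ec20f2e.0`) look like OpenSSL subject-hash aliases (the `openssl x509 -hash` value of the corresponding `.pem`) installed so hash-based CA lookup finds the test certs; treat that reading as an inference from the paths, not something the log states. A sketch of the same six existence checks:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/8496.pem",
		"/usr/share/ca-certificates/8496.pem",
		"/etc/ssl/certs/51391683.0", // presumed hash alias of 8496.pem
		"/etc/ssl/certs/84962.pem",
		"/usr/share/ca-certificates/84962.pem",
		"/etc/ssl/certs/3ec20f2e.0", // presumed hash alias of 84962.pem
	}
	for _, p := range paths {
		out, err := exec.Command("minikube", "-p", "functional-197400",
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			log.Fatalf("%s missing: %v\n%s", p, err, out)
		}
		fmt.Println("found", p)
	}
}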

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-197400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)
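
Note: the `--template` argument is plain Go text/template syntax; `{{range $k, $v := ...}}` walks the node's label map and prints each key. The same construct standalone:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for (index .items 0).metadata.labels in the kubectl output.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-197400",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}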

TestFunctional/parallel/NonActiveRuntimeDisabled (9.81s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh "sudo systemctl is-active crio": exit status 1 (9.8046531s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0429 11:34:24.984094    5964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.81s)
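
Note: the non-zero exit here is the point of the test, not a failure. `systemctl is-active` exits non-zero for a unit that is not active (the `ssh: Process exited with status 3` above is systemd's "inactive" code), so the assertion is "command failed AND stdout reads `inactive`". A sketch of that inverted check:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns the captured stdout alongside an *exec.ExitError.
	out, err := exec.Command("minikube", "-p", "functional-197400",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err == nil {
		log.Fatalf("crio reports %q but should be disabled under the docker runtime", state)
	}
	if state != "inactive" {
		log.Fatalf("expected \"inactive\", got %q (%v)", state, err)
	}
	fmt.Println("crio correctly inactive")
}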

TestFunctional/parallel/License (3.13s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.1072376s)
--- PASS: TestFunctional/parallel/License (3.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-197400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-197400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-nfdq7" [403bfa76-2775-4268-ba6a-6bc59ae380d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-nfdq7" [403bfa76-2775-4268-ba6a-6bc59ae380d5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.0211509s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.67s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.1067678s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.67s)

TestFunctional/parallel/ProfileCmd/profile_list (10.93s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.5879665s)
functional_test.go:1311: Took "10.5881325s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "337.6281ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.93s)

TestFunctional/parallel/ServiceCmd/List (14.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 service list: (14.1190684s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (11.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.2797403s)
functional_test.go:1362: Took "11.2798224s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "287.536ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 service list -o json: (14.6422208s)
functional_test.go:1490: Took "14.642324s" to run "out/minikube-windows-amd64.exe -p functional-197400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.64s)

TestFunctional/parallel/DockerEnv/powershell (44.82s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-197400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-197400"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-197400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-197400": (29.5206471s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-197400 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-197400 docker-env | Invoke-Expression ; docker images": (15.2810585s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (44.82s)
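
Note: `docker-env` emits `$Env:DOCKER_HOST = ...`-style assignments that `Invoke-Expression` applies, pointing the host's docker CLI at the daemon inside the VM, which is why the subsequent `docker images` lists the cluster's images. A sketch of consuming the same output from Go, assuming the `--shell none` variant (inherited from docker-machine) that prints bare KEY=VALUE pairs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-197400",
		"docker-env", "--shell", "none").Output()
	if err != nil {
		log.Fatal(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		env = append(env, line) // already KEY=VALUE, e.g. DOCKER_HOST=tcp://...
	}
	cmd := exec.Command("docker", "images")
	cmd.Env = env
	imgs, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("docker images: %v\n%s", err, imgs)
	}
	fmt.Print(string(imgs))
}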

TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2: (2.622873s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2: (2.6041615s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.65s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 update-context --alsologtostderr -v=2: (2.6492252s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.65s)

TestFunctional/parallel/Version/short (0.3s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.30s)

TestFunctional/parallel/Version/components (8.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 version -o=json --components: (8.5133069s)
--- PASS: TestFunctional/parallel/Version/components (8.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls --format short --alsologtostderr: (8.3022295s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-197400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-197400
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-197400 image ls --format short --alsologtostderr:
W0429 11:36:44.338944    6712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 11:36:44.444955    6712 out.go:291] Setting OutFile to fd 1276 ...
I0429 11:36:44.445948    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:44.445948    6712 out.go:304] Setting ErrFile to fd 1148...
I0429 11:36:44.445948    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:44.475975    6712 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:44.476967    6712 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:44.477958    6712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:47.025960    6712 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:47.026063    6712 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:47.041669    6712 ssh_runner.go:195] Run: systemctl --version
I0429 11:36:47.041669    6712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:49.564172    6712 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:49.564252    6712 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:49.564397    6712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
I0429 11:36:52.294637    6712 main.go:141] libmachine: [stdout =====>] : 172.26.179.82

I0429 11:36:52.294637    6712 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:52.295773    6712 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
I0429 11:36:52.421463    6712 ssh_runner.go:235] Completed: systemctl --version: (5.3797513s)
I0429 11:36:52.432449    6712 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.30s)
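
Note: the `[executing ==>]` stderr lines explain why a simple `image ls` takes ~8s here: on the Hyper-V driver, every VM query (state, then IP address) is a separate PowerShell round-trip before any SSH command runs. The logged state query, reproduced standalone:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation libmachine logs above.
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`( Hyper-V\Get-VM functional-197400 ).state`).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("VM state:", strings.TrimSpace(string(out))) // e.g. "Running"
}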

TestFunctional/parallel/ImageCommands/ImageListTable (7.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls --format table --alsologtostderr: (7.8738674s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-197400 image ls --format table --alsologtostderr:
|-----------------------------------------|-------------------|---------------|--------|
|                  Image                  |        Tag        |   Image ID    |  Size  |
|-----------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                 | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/kube-proxy              | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/library/mysql                 | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/google-containers/addon-resizer  | functional-197400 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver              | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                 | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-apiserver          | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/etcd                    | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                   | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                | 6e38f40d628db | 31.5MB |
|-----------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-197400 image ls --format table --alsologtostderr:
W0429 11:36:52.630239   14228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 11:36:52.715241   14228 out.go:291] Setting OutFile to fd 1340 ...
I0429 11:36:52.731936   14228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:52.731936   14228 out.go:304] Setting ErrFile to fd 1364...
I0429 11:36:52.731936   14228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:52.750279   14228 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:52.750952   14228 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:52.751565   14228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:55.028068   14228 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:55.028750   14228 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:55.044926   14228 ssh_runner.go:195] Run: systemctl --version
I0429 11:36:55.044926   14228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:57.405859   14228 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:57.406505   14228 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:57.406566   14228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
I0429 11:37:00.183974   14228 main.go:141] libmachine: [stdout =====>] : 172.26.179.82

I0429 11:37:00.183974   14228 main.go:141] libmachine: [stderr =====>] : 
I0429 11:37:00.184735   14228 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
I0429 11:37:00.286639   14228 ssh_runner.go:235] Completed: systemctl --version: (5.2416713s)
I0429 11:37:00.302596   14228 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.87s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls --format json --alsologtostderr: (7.9718412s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-197400 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-197400"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-197400 image ls --format json --alsologtostderr:
W0429 11:36:52.507389   13656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 11:36:52.601267   13656 out.go:291] Setting OutFile to fd 1344 ...
I0429 11:36:52.602229   13656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:52.602229   13656 out.go:304] Setting ErrFile to fd 1340...
I0429 11:36:52.602229   13656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:52.620241   13656 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:52.621244   13656 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:52.622235   13656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:54.924144   13656 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:54.924235   13656 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:54.942794   13656 ssh_runner.go:195] Run: systemctl --version
I0429 11:36:54.943767   13656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:57.359470   13656 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:57.359470   13656 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:57.359553   13656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
I0429 11:37:00.132267   13656 main.go:141] libmachine: [stdout =====>] : 172.26.179.82

I0429 11:37:00.133162   13656 main.go:141] libmachine: [stderr =====>] : 
I0429 11:37:00.133833   13656 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
I0429 11:37:00.238469   13656 ssh_runner.go:235] Completed: systemctl --version: (5.2946601s)
I0429 11:37:00.250834   13656 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.97s)
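
Note: per the stderr trace, `image ls` is backed by `docker images --no-trunc --format "{{json .}}"` run over SSH, one JSON object per line, which minikube then reshapes into the `id`/`repoTags`/`size` records shown in the stdout above. A sketch decoding one of those output records (the struct models only the fields visible here; that subset is an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the JSON stdout above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	line := `[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",` +
		`"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(line), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Printf("%v  %s  %s bytes\n", img.RepoTags, img.ID[:12], img.Size)
	}
}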

TestFunctional/parallel/ImageCommands/ImageListYaml (8.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls --format yaml --alsologtostderr: (8.1705895s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-197400 image ls --format yaml --alsologtostderr:
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-197400
size: "32900000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-197400 image ls --format yaml --alsologtostderr:
W0429 11:36:44.338944   12776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 11:36:44.444955   12776 out.go:291] Setting OutFile to fd 1220 ...
I0429 11:36:44.445948   12776 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:44.445948   12776 out.go:304] Setting ErrFile to fd 1192...
I0429 11:36:44.445948   12776 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:44.462972   12776 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:44.462972   12776 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:44.463951   12776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:46.965858   12776 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:46.965858   12776 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:46.981842   12776 ssh_runner.go:195] Run: systemctl --version
I0429 11:36:46.981842   12776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:49.467296   12776 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:49.467296   12776 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:49.467296   12776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
I0429 11:36:52.179993   12776 main.go:141] libmachine: [stdout =====>] : 172.26.179.82

I0429 11:36:52.180156   12776 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:52.181104   12776 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
I0429 11:36:52.283223   12776 ssh_runner.go:235] Completed: systemctl --version: (5.3013386s)
I0429 11:36:52.299805   12776 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-197400 ssh pgrep buildkitd: exit status 1 (10.4354171s)

** stderr ** 
	W0429 11:36:44.340943    3392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image build -t localhost/my-image:functional-197400 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image build -t localhost/my-image:functional-197400 testdata\build --alsologtostderr: (10.3295116s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-197400 image build -t localhost/my-image:functional-197400 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in e24a65e176ff
---> Removed intermediate container e24a65e176ff
---> 746e227c940a
Step 3/3 : ADD content.txt /
---> b55b5fa59545
Successfully built b55b5fa59545
Successfully tagged localhost/my-image:functional-197400
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-197400 image build -t localhost/my-image:functional-197400 testdata\build --alsologtostderr:
W0429 11:36:54.768511    5756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0429 11:36:54.858031    5756 out.go:291] Setting OutFile to fd 1396 ...
I0429 11:36:54.876618    5756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:54.876618    5756 out.go:304] Setting ErrFile to fd 1400...
I0429 11:36:54.876618    5756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:36:54.902327    5756 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:54.926142    5756 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 11:36:54.927223    5756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:57.406666    5756 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:57.406696    5756 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:57.423180    5756 ssh_runner.go:195] Run: systemctl --version
I0429 11:36:57.423180    5756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-197400 ).state
I0429 11:36:59.730422    5756 main.go:141] libmachine: [stdout =====>] : Running

I0429 11:36:59.730422    5756 main.go:141] libmachine: [stderr =====>] : 
I0429 11:36:59.730727    5756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-197400 ).networkadapters[0]).ipaddresses[0]
I0429 11:37:02.262308    5756 main.go:141] libmachine: [stdout =====>] : 172.26.179.82

I0429 11:37:02.262308    5756 main.go:141] libmachine: [stderr =====>] : 
I0429 11:37:02.262462    5756 sshutil.go:53] new ssh client: &{IP:172.26.179.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-197400\id_rsa Username:docker}
I0429 11:37:02.359168    5756 ssh_runner.go:235] Completed: systemctl --version: (4.9359499s)
I0429 11:37:02.359370    5756 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3906467974.tar
I0429 11:37:02.375309    5756 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 11:37:02.416604    5756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3906467974.tar
I0429 11:37:02.424639    5756 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3906467974.tar: stat -c "%s %y" /var/lib/minikube/build/build.3906467974.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3906467974.tar': No such file or directory
I0429 11:37:02.424869    5756 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3906467974.tar --> /var/lib/minikube/build/build.3906467974.tar (3072 bytes)
I0429 11:37:02.507859    5756 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3906467974
I0429 11:37:02.539605    5756 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3906467974 -xf /var/lib/minikube/build/build.3906467974.tar
I0429 11:37:02.560454    5756 docker.go:360] Building image: /var/lib/minikube/build/build.3906467974
I0429 11:37:02.570920    5756 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-197400 /var/lib/minikube/build/build.3906467974
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0429 11:37:04.869404    5756 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-197400 /var/lib/minikube/build/build.3906467974: (2.298405s)
I0429 11:37:04.883164    5756 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3906467974
I0429 11:37:04.916059    5756 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3906467974.tar
I0429 11:37:04.938357    5756 build_images.go:217] Built localhost/my-image:functional-197400 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3906467974.tar
I0429 11:37:04.938556    5756 build_images.go:133] succeeded building to: functional-197400
I0429 11:37:04.938640    5756 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (7.3288904s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.09s)
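
Note: the stderr trace lays out the build flow: pack the context into a tar on the host, scp it to /var/lib/minikube/build inside the VM, untar it there, and run `docker build` (whose daemon warns that the legacy builder is deprecated in favor of buildx). A sketch of the first step, packing a context directory into a tar, with no claim that it matches minikube's exact packer:

package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	out, err := os.Create("build.tar") // analogous to the Temp build.*.tar above
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	root := "testdata/build" // the context dir the test builds from
	err = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel) // tar entries use forward slashes
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}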

TestFunctional/parallel/ImageCommands/Setup (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.7701196s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-197400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.03s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr: (16.8499556s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (7.3125954s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr: (11.6266512s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (8.0236613s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.65s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2044: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13440: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.44s)
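
The "unable to kill pid" warnings above are expected cleanup noise: tunnel PIDs are killed best-effort, and on Windows a PID that has already exited (or been recycled) surfaces as an OpenProcess/TerminateProcess error rather than a test failure. A minimal sketch of such a best-effort kill (not the suite's exact code):

package main

import (
	"fmt"
	"os"
)

// killBestEffort tries to terminate a recorded PID and only logs failures.
// On Windows, os.FindProcess opens the process (OpenProcess) and Kill calls
// TerminateProcess, which is where errors like "The parameter is incorrect"
// and "Access is denied" in the log above come from.
func killBestEffort(pid int) {
	proc, err := os.FindProcess(pid)
	if err != nil {
		fmt.Printf("unable to kill pid %d: %v\n", pid, err)
		return
	}
	if err := proc.Kill(); err != nil {
		fmt.Printf("unable to kill pid %d: %v\n", pid, err)
	}
}

func main() {
	for _, pid := range []int{2044, 13440} { // PIDs from the log; long gone by now
		killBestEffort(pid)
	}
}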

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.157011s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-197400
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image load --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr: (15.324664s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (7.8607912s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-197400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [00aa158e-f390-43da-9462-63a418ce3c5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [00aa158e-f390-43da-9462-63a418ce3c5b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0156872s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.71s)
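
The Setup step applies testdata\testsvc.yaml and then polls until a pod labeled run=nginx-svc reports Running, which took about 15s here. A standalone sketch of that polling loop, using kubectl directly rather than the suite's helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls the phase of pods matching a label selector until it
// is Running or the timeout elapses — roughly what helpers_test.go:344
// reports above. Assumes a single matching pod.
func waitForRunning(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	if err := waitForRunning("functional-197400", "run=nginx-svc", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}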

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-197400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12428: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image save gcr.io/google-containers/addon-resizer:functional-197400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image save gcr.io/google-containers/addon-resizer:functional-197400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.9922929s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (14.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image rm gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image rm gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr: (7.4949811s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (7.2965873s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.79s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
E0429 11:36:27.440560    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.6556702s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image ls: (7.2602688s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.92s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-197400
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-197400 image save --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-197400 image save --daemon gcr.io/google-containers/addon-resizer:functional-197400 --alsologtostderr: (8.8918877s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-197400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.26s)

TestFunctional/delete_addon-resizer_images (0.02s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-197400
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-197400: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:functional-197400" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-197400": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)
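
The "(0s)" durations above are telling: this cleanup runs with a context whose deadline has already passed, so exec.CommandContext returns the context error before docker is even launched, and the test still passes because image removal is best-effort. The effect is easy to reproduce (assumes a local docker CLI on PATH):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A deadline in the past: Run fails in ~0s without launching docker,
	// matching the "context deadline exceeded (0s)" lines above.
	ctx, cancel := context.WithDeadline(context.Background(), time.Now())
	defer cancel()

	err := exec.CommandContext(ctx, "docker", "rmi", "-f",
		"gcr.io/google-containers/addon-resizer:1.8.8").Run()
	fmt.Println(err) // context deadline exceeded
}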

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-197400
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-197400: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-197400": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-197400
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-197400: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-197400": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (717.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-437800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0429 11:41:10.663109    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 11:41:27.447088    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 11:42:24.751028    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:24.766176    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:24.781495    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:24.812821    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:24.859573    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:24.954683    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:25.126912    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:25.460853    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:26.109796    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:27.393262    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:29.967484    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:35.102438    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:42:45.356696    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:43:05.839143    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:43:46.800384    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:45:08.728175    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:46:27.450194    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 11:47:24.751947    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 11:47:52.581940    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-437800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m20.110948s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr: (36.9520792s)
--- PASS: TestMultiControlPlane/serial/StartCluster (717.06s)

TestMultiControlPlane/serial/DeployApp (12.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-437800 -- rollout status deployment/busybox: (3.7848579s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- nslookup kubernetes.io: (2.2464616s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-dsnxf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-kxn7k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-437800 -- exec busybox-fc5497c4f-ndzvx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.40s)
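
DeployApp rolls out a busybox deployment and then verifies in-cluster DNS by running nslookup inside each pod for the three name forms. Roughly the same verification loop, standalone, using kubectl directly (the pod names below were generated by this particular run and will differ elsewhere):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-dsnxf", "busybox-fc5497c4f-kxn7k", "busybox-fc5497c4f-ndzvx"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Same check as ha_test.go:171/181/189 above.
			out, err := exec.Command("kubectl", "--context", "ha-437800",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n%s\n", pod, name, err, out)
			}
		}
	}
}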

TestMultiControlPlane/serial/AddWorkerNode (255.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-437800 -v=7 --alsologtostderr
E0429 11:52:24.751673    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-437800 -v=7 --alsologtostderr: (3m26.4955896s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr: (48.7936952s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (255.29s)

TestMultiControlPlane/serial/NodeLabels (0.22s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-437800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.22s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (28.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0429 11:56:27.459459    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (28.7407622s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (28.74s)

TestMultiControlPlane/serial/CopyFile (637.33s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 status --output json -v=7 --alsologtostderr
E0429 11:57:24.755359    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 status --output json -v=7 --alsologtostderr: (48.9193119s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800:/home/docker/cp-test.txt: (9.5738555s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt": (9.6256417s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800.txt
E0429 11:57:50.682471    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800.txt: (9.7049756s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt": (9.6149641s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800_ha-437800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800_ha-437800-m02.txt: (16.7868457s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt": (9.5527876s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m02.txt": (9.5875162s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800_ha-437800-m03.txt
E0429 11:58:47.954232    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800_ha-437800-m03.txt: (16.7213803s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt": (9.5226852s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m03.txt": (9.5371992s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800_ha-437800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800_ha-437800-m04.txt: (16.6354835s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test.txt": (9.6587079s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800_ha-437800-m04.txt": (9.5967792s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m02:/home/docker/cp-test.txt: (9.5458047s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt": (9.6108676s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m02.txt: (9.5881472s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt": (9.5479971s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m02_ha-437800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m02_ha-437800.txt: (16.7195945s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt": (9.7086874s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800.txt": (9.6306266s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800-m02_ha-437800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800-m02_ha-437800-m03.txt: (17.2403767s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt"
E0429 12:01:27.453201    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt": (9.9836128s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800-m03.txt": (9.7157141s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800-m02_ha-437800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m02:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800-m02_ha-437800-m04.txt: (17.0331538s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test.txt": (9.8169144s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800-m02_ha-437800-m04.txt": (9.8157402s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m03:/home/docker/cp-test.txt
E0429 12:02:24.761907    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m03:/home/docker/cp-test.txt: (9.6635954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt": (9.6044031s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m03.txt: (9.6779628s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt": (9.6580153s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m03_ha-437800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m03_ha-437800.txt: (16.7883646s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt": (9.6470155s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800.txt": (9.7013067s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt: (16.8410981s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt": (9.6699011s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800-m02.txt": (9.6225477s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m03:/home/docker/cp-test.txt ha-437800-m04:/home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt: (16.749082s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test.txt": (9.6944612s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test_ha-437800-m03_ha-437800-m04.txt": (9.5716119s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp testdata\cp-test.txt ha-437800-m04:/home/docker/cp-test.txt: (9.6862076s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt": (9.6589154s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile939459166\001\cp-test_ha-437800-m04.txt: (9.6633849s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt": (9.5313348s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m04_ha-437800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800:/home/docker/cp-test_ha-437800-m04_ha-437800.txt: (16.570281s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt": (9.696855s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800.txt": (9.9264341s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800-m02:/home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt: (17.095741s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt"
E0429 12:06:27.457032    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt": (9.6757032s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m02 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800-m02.txt": (9.7190424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 cp ha-437800-m04:/home/docker/cp-test.txt ha-437800-m03:/home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt: (16.8405425s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m04 "sudo cat /home/docker/cp-test.txt": (9.6300356s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 ssh -n ha-437800-m03 "sudo cat /home/docker/cp-test_ha-437800-m04_ha-437800-m03.txt": (9.7174564s)
--- PASS: TestMultiControlPlane/serial/CopyFile (637.33s)
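
Nearly all of CopyFile's 637s is this matrix: for each source/destination pair among the four nodes, copy cp-test.txt in with "minikube cp" and read it back over "minikube ssh", at roughly 10-17s per step on this host. One cell of the matrix as a standalone sketch:

package main

import (
	"fmt"
	"os/exec"
)

const minikube = "out/minikube-windows-amd64.exe"

// copyAndVerify pushes a file to a node and reads it back, the same
// cp-then-ssh pairing that helpers_test.go:556/534 log above.
func copyAndVerify(profile, src, node, dst string) error {
	if out, err := exec.Command(minikube, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v\n%s", err, out)
	}
	fmt.Printf("%s:%s contains %q\n", node, dst, out)
	return nil
}

func main() {
	if err := copyAndVerify("ha-437800", `testdata\cp-test.txt`, "ha-437800-m02", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println(err)
	}
}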

TestMultiControlPlane/serial/StopSecondaryNode (75.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 node stop m02 -v=7 --alsologtostderr
E0429 12:07:24.749159    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-437800 node stop m02 -v=7 --alsologtostderr: (36.4991638s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-437800 status -v=7 --alsologtostderr: exit status 7 (39.2134273s)

-- stdout --
	ha-437800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-437800-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-437800-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-437800-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0429 12:07:53.554682   10192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 12:07:53.649299   10192 out.go:291] Setting OutFile to fd 1152 ...
	I0429 12:07:53.649964   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:07:53.649964   10192 out.go:304] Setting ErrFile to fd 1212...
	I0429 12:07:53.650046   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:07:53.663831   10192 out.go:298] Setting JSON to false
	I0429 12:07:53.663831   10192 mustload.go:65] Loading cluster: ha-437800
	I0429 12:07:53.663831   10192 notify.go:220] Checking for updates...
	I0429 12:07:53.664420   10192 config.go:182] Loaded profile config "ha-437800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:07:53.664420   10192 status.go:255] checking status of ha-437800 ...
	I0429 12:07:53.665597   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 12:07:55.926752   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:07:55.926752   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:07:55.926752   10192 status.go:330] ha-437800 host status = "Running" (err=<nil>)
	I0429 12:07:55.926752   10192 host.go:66] Checking if "ha-437800" exists ...
	I0429 12:07:55.927721   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 12:07:58.155191   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:07:58.155268   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:07:58.155345   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:00.928949   10192 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 12:08:00.928949   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:00.929744   10192 host.go:66] Checking if "ha-437800" exists ...
	I0429 12:08:00.954498   10192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:00.954498   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800 ).state
	I0429 12:08:03.163937   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:03.164229   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:03.164306   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:05.890868   10192 main.go:141] libmachine: [stdout =====>] : 172.26.176.3
	
	I0429 12:08:05.890868   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:05.891360   10192 sshutil.go:53] new ssh client: &{IP:172.26.176.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800\id_rsa Username:docker}
	I0429 12:08:05.993022   10192 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0383219s)
	I0429 12:08:06.008205   10192 ssh_runner.go:195] Run: systemctl --version
	I0429 12:08:06.031322   10192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:06.067273   10192 kubeconfig.go:125] found "ha-437800" server: "https://172.26.191.254:8443"
	I0429 12:08:06.067273   10192 api_server.go:166] Checking apiserver status ...
	I0429 12:08:06.078267   10192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:08:06.129108   10192 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup
	W0429 12:08:06.151664   10192 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:08:06.168167   10192 ssh_runner.go:195] Run: ls
	I0429 12:08:06.177443   10192 api_server.go:253] Checking apiserver healthz at https://172.26.191.254:8443/healthz ...
	I0429 12:08:06.186601   10192 api_server.go:279] https://172.26.191.254:8443/healthz returned 200:
	ok
	I0429 12:08:06.186601   10192 status.go:422] ha-437800 apiserver status = Running (err=<nil>)
	I0429 12:08:06.187220   10192 status.go:257] ha-437800 status: &{Name:ha-437800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:06.187220   10192 status.go:255] checking status of ha-437800-m02 ...
	I0429 12:08:06.187935   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m02 ).state
	I0429 12:08:08.340578   10192 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 12:08:08.340578   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:08.341033   10192 status.go:330] ha-437800-m02 host status = "Stopped" (err=<nil>)
	I0429 12:08:08.341033   10192 status.go:343] host is not running, skipping remaining checks
	I0429 12:08:08.341033   10192 status.go:257] ha-437800-m02 status: &{Name:ha-437800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:08.341138   10192 status.go:255] checking status of ha-437800-m03 ...
	I0429 12:08:08.341849   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 12:08:10.539288   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:10.539907   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:10.539907   10192 status.go:330] ha-437800-m03 host status = "Running" (err=<nil>)
	I0429 12:08:10.539907   10192 host.go:66] Checking if "ha-437800-m03" exists ...
	I0429 12:08:10.540673   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 12:08:12.809826   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:12.809826   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:12.809826   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:15.510107   10192 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 12:08:15.510107   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:15.510173   10192 host.go:66] Checking if "ha-437800-m03" exists ...
	I0429 12:08:15.524393   10192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:15.524393   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m03 ).state
	I0429 12:08:17.752642   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:17.752642   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:17.752642   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m03 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:20.414337   10192 main.go:141] libmachine: [stdout =====>] : 172.26.177.113
	
	I0429 12:08:20.414337   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:20.414337   10192 sshutil.go:53] new ssh client: &{IP:172.26.177.113 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m03\id_rsa Username:docker}
	I0429 12:08:20.527986   10192 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0035005s)
	I0429 12:08:20.542364   10192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:20.586993   10192 kubeconfig.go:125] found "ha-437800" server: "https://172.26.191.254:8443"
	I0429 12:08:20.587221   10192 api_server.go:166] Checking apiserver status ...
	I0429 12:08:20.607409   10192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:08:20.665135   10192 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	W0429 12:08:20.694792   10192 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:08:20.708204   10192 ssh_runner.go:195] Run: ls
	I0429 12:08:20.717494   10192 api_server.go:253] Checking apiserver healthz at https://172.26.191.254:8443/healthz ...
	I0429 12:08:20.725956   10192 api_server.go:279] https://172.26.191.254:8443/healthz returned 200:
	ok
	I0429 12:08:20.726124   10192 status.go:422] ha-437800-m03 apiserver status = Running (err=<nil>)
	I0429 12:08:20.726161   10192 status.go:257] ha-437800-m03 status: &{Name:ha-437800-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:20.726161   10192 status.go:255] checking status of ha-437800-m04 ...
	I0429 12:08:20.726954   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m04 ).state
	I0429 12:08:22.921854   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:22.922306   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:22.922306   10192 status.go:330] ha-437800-m04 host status = "Running" (err=<nil>)
	I0429 12:08:22.922306   10192 host.go:66] Checking if "ha-437800-m04" exists ...
	I0429 12:08:22.923166   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m04 ).state
	I0429 12:08:25.185212   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:25.185212   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:25.186120   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m04 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:27.809170   10192 main.go:141] libmachine: [stdout =====>] : 172.26.187.66
	
	I0429 12:08:27.809170   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:27.809946   10192 host.go:66] Checking if "ha-437800-m04" exists ...
	I0429 12:08:27.823678   10192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:27.823678   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-437800-m04 ).state
	I0429 12:08:29.911636   10192 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 12:08:29.912490   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:29.912490   10192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-437800-m04 ).networkadapters[0]).ipaddresses[0]
	I0429 12:08:32.469117   10192 main.go:141] libmachine: [stdout =====>] : 172.26.187.66
	
	I0429 12:08:32.469291   10192 main.go:141] libmachine: [stderr =====>] : 
	I0429 12:08:32.469942   10192 sshutil.go:53] new ssh client: &{IP:172.26.187.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-437800-m04\id_rsa Username:docker}
	I0429 12:08:32.569114   10192 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7453987s)
	I0429 12:08:32.583090   10192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:32.614281   10192 status.go:257] ha-437800-m04 status: &{Name:ha-437800-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (75.71s)
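
The 39s status call above is dominated by serialized PowerShell round trips: for every node, the Hyper-V driver queries VM state and the first IP address via Hyper-V\Get-VM, each call costing roughly two seconds on this host. The "[executing ==>]" pattern reduced to a sketch (Windows-only; assumes the Hyper-V PowerShell module is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same PowerShell query as the libmachine lines above and
// returns "Running", "Off", etc.
func vmState(vm string) (string, error) {
	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm)).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, vm := range []string{"ha-437800", "ha-437800-m02", "ha-437800-m03", "ha-437800-m04"} {
		state, err := vmState(vm)
		fmt.Println(vm, state, err)
	}
}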

TestImageBuild/serial/Setup (203.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-147200 --driver=hyperv
E0429 12:12:24.765307    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 12:14:30.694711    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-147200 --driver=hyperv: (3m23.9091012s)
--- PASS: TestImageBuild/serial/Setup (203.91s)

TestImageBuild/serial/NormalBuild (9.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-147200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-147200: (9.6885673s)
--- PASS: TestImageBuild/serial/NormalBuild (9.69s)

TestImageBuild/serial/BuildWithBuildArg (9.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147200
E0429 12:15:27.965318    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147200: (9.1008158s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.10s)

TestImageBuild/serial/BuildWithDockerIgnore (7.68s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-147200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-147200: (7.6772699s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.68s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.6s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-147200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-147200: (7.601542s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.60s)

TestJSONOutput/start/Command (244.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-657400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0429 12:17:24.760224    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-657400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m4.2145622s)
--- PASS: TestJSONOutput/start/Command (244.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.94s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-657400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-657400 --output=json --user=testUser: (7.9418034s)
--- PASS: TestJSONOutput/pause/Command (7.94s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.9s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-657400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-657400 --output=json --user=testUser: (7.9041092s)
--- PASS: TestJSONOutput/unpause/Command (7.90s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.53s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-657400 --output=json --user=testUser
E0429 12:21:27.464554    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-657400 --output=json --user=testUser: (39.5338675s)
--- PASS: TestJSONOutput/stop/Command (39.53s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.59s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-036500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-036500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (319.9742ms)

-- stdout --
	{"specversion":"1.0","id":"17af0545-90d6-4934-a93d-8570d1022173","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-036500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fea453ef-5447-4521-b1cf-3e53aa62ee1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"4e75a634-8d1f-4a2a-82bc-74dc3a8aa297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4778394c-ff39-49e0-a77c-5b6ce50a6c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"afc777d5-f2ef-417d-8b61-ecd24355f9bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}
	{"specversion":"1.0","id":"fdff927a-4967-41df-950b-413f46d2eaef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"843c1c25-2632-4c17-a856-a9d1c4c92271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0429 12:21:46.131334   13128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-036500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-036500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-036500: (1.2684456s)
--- PASS: TestErrorJSONOutput (1.59s)

TestMainNoArgs (0.28s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.28s)

TestMinikubeProfile (528.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-574100 --driver=hyperv
E0429 12:22:24.771226    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-574100 --driver=hyperv: (3m17.2759508s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-574100 --driver=hyperv
E0429 12:26:27.473698    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:27:24.758202    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-574100 --driver=hyperv: (3m19.1266094s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-574100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.1120452s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-574100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.1455555s)
helpers_test.go:175: Cleaning up "second-574100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-574100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-574100: (46.19833s)
helpers_test.go:175: Cleaning up "first-574100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-574100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-574100: (46.2941797s)
--- PASS: TestMinikubeProfile (528.09s)

TestMountStart/serial/StartWithMountFirst (156.36s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-694400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0429 12:31:10.703412    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:31:27.474111    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:32:07.981505    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 12:32:24.771677    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-694400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.3572039s)
--- PASS: TestMountStart/serial/StartWithMountFirst (156.36s)

TestMountStart/serial/VerifyMountFirst (9.62s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-694400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-694400 ssh -- ls /minikube-host: (9.6198172s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.62s)

TestMountStart/serial/StartWithMountSecond (157.26s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-694400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-694400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m36.2596605s)
--- PASS: TestMountStart/serial/StartWithMountSecond (157.26s)

TestMountStart/serial/VerifyMountSecond (9.6s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-694400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-694400 ssh -- ls /minikube-host: (9.5968645s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.60s)

TestMountStart/serial/DeleteFirst (27.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-694400 --alsologtostderr -v=5
E0429 12:36:27.476338    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-694400 --alsologtostderr -v=5: (27.8368208s)
--- PASS: TestMountStart/serial/DeleteFirst (27.84s)

TestMountStart/serial/VerifyMountPostDelete (9.6s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-694400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-694400 ssh -- ls /minikube-host: (9.5949829s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.60s)

TestMountStart/serial/Stop (26.6s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-694400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-694400: (26.5955688s)
--- PASS: TestMountStart/serial/Stop (26.60s)

TestMultiNode/serial/FreshStart2Nodes (434.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-409200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0429 12:41:27.481157    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:42:24.771760    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 12:46:27.478058    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:47:24.782203    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
E0429 12:47:50.725828    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-409200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m50.5477373s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr: (24.2357446s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (434.78s)

TestMultiNode/serial/DeployApp2Nodes (8.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- rollout status deployment/busybox: (2.4845253s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- nslookup kubernetes.io: (1.9147464s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-gr44t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-409200 -- exec busybox-fc5497c4f-xvm2v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.79s)

TestMultiNode/serial/AddNode (229.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-409200 -v 3 --alsologtostderr
E0429 12:51:27.487640    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 12:52:24.778945    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-409200 -v 3 --alsologtostderr: (3m13.1722443s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr: (36.0614037s)
--- PASS: TestMultiNode/serial/AddNode (229.23s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-409200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (9.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.7000248s)
--- PASS: TestMultiNode/serial/ProfileList (9.70s)

TestMultiNode/serial/CopyFile (365.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 status --output json --alsologtostderr: (35.9417782s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200:/home/docker/cp-test.txt: (9.5118133s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt": (9.5724044s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200.txt: (9.5411381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt": (9.4959391s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt multinode-409200-m02:/home/docker/cp-test_multinode-409200_multinode-409200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt multinode-409200-m02:/home/docker/cp-test_multinode-409200_multinode-409200-m02.txt: (16.5578389s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt": (9.5170453s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test_multinode-409200_multinode-409200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test_multinode-409200_multinode-409200-m02.txt": (9.4831791s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt multinode-409200-m03:/home/docker/cp-test_multinode-409200_multinode-409200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200:/home/docker/cp-test.txt multinode-409200-m03:/home/docker/cp-test_multinode-409200_multinode-409200-m03.txt: (16.5116239s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test.txt": (9.4688878s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test_multinode-409200_multinode-409200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test_multinode-409200_multinode-409200-m03.txt": (9.4891661s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200-m02:/home/docker/cp-test.txt: (9.6109226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt"
E0429 12:56:27.481826    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt": (9.5273246s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m02.txt: (9.4905305s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt": (9.6001695s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt multinode-409200:/home/docker/cp-test_multinode-409200-m02_multinode-409200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt multinode-409200:/home/docker/cp-test_multinode-409200-m02_multinode-409200.txt: (16.5434469s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt": (9.4383532s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test_multinode-409200-m02_multinode-409200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test_multinode-409200-m02_multinode-409200.txt": (9.5766714s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt multinode-409200-m03:/home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt
E0429 12:57:24.772927    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m02:/home/docker/cp-test.txt multinode-409200-m03:/home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt: (16.4936936s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test.txt": (9.6704819s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test_multinode-409200-m02_multinode-409200-m03.txt": (9.5103665s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp testdata\cp-test.txt multinode-409200-m03:/home/docker/cp-test.txt: (9.6279644s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt": (9.6917061s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2311671446\001\cp-test_multinode-409200-m03.txt: (9.4945822s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt": (9.4904955s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt multinode-409200:/home/docker/cp-test_multinode-409200-m03_multinode-409200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt multinode-409200:/home/docker/cp-test_multinode-409200-m03_multinode-409200.txt: (16.8425413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt": (9.6287971s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test_multinode-409200-m03_multinode-409200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200 "sudo cat /home/docker/cp-test_multinode-409200-m03_multinode-409200.txt": (9.6438193s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt multinode-409200-m02:/home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 cp multinode-409200-m03:/home/docker/cp-test.txt multinode-409200-m02:/home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt: (16.6851523s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m03 "sudo cat /home/docker/cp-test.txt": (9.6637652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 ssh -n multinode-409200-m02 "sudo cat /home/docker/cp-test_multinode-409200-m03_multinode-409200-m02.txt": (9.7134352s)
--- PASS: TestMultiNode/serial/CopyFile (365.06s)

TestMultiNode/serial/StopNode (77.73s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-409200 node stop m03: (25.3077824s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-409200 status: exit status 7 (26.3387248s)

-- stdout --
	multinode-409200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0429 13:00:14.870400    4532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-409200 status --alsologtostderr: exit status 7 (26.0826107s)

-- stdout --
	multinode-409200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0429 13:00:41.204285   13184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 13:00:41.286284   13184 out.go:291] Setting OutFile to fd 1284 ...
	I0429 13:00:41.287306   13184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:00:41.287306   13184 out.go:304] Setting ErrFile to fd 1528...
	I0429 13:00:41.287306   13184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:00:41.301283   13184 out.go:298] Setting JSON to false
	I0429 13:00:41.302279   13184 mustload.go:65] Loading cluster: multinode-409200
	I0429 13:00:41.302279   13184 notify.go:220] Checking for updates...
	I0429 13:00:41.302279   13184 config.go:182] Loaded profile config "multinode-409200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:00:41.302279   13184 status.go:255] checking status of multinode-409200 ...
	I0429 13:00:41.303283   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:00:43.493490   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:00:43.493570   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:43.493570   13184 status.go:330] multinode-409200 host status = "Running" (err=<nil>)
	I0429 13:00:43.493809   13184 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:00:43.494817   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:00:45.720417   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:00:45.720865   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:45.720931   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:00:48.351101   13184 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:00:48.351101   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:48.351720   13184 host.go:66] Checking if "multinode-409200" exists ...
	I0429 13:00:48.366198   13184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:00:48.366198   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200 ).state
	I0429 13:00:50.450496   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:00:50.450496   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:50.450496   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200 ).networkadapters[0]).ipaddresses[0]
	I0429 13:00:53.120114   13184 main.go:141] libmachine: [stdout =====>] : 172.26.185.116
	
	I0429 13:00:53.120114   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:53.120601   13184 sshutil.go:53] new ssh client: &{IP:172.26.185.116 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200\id_rsa Username:docker}
	I0429 13:00:53.229002   13184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8627668s)
	I0429 13:00:53.248630   13184 ssh_runner.go:195] Run: systemctl --version
	I0429 13:00:53.275750   13184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:00:53.304238   13184 kubeconfig.go:125] found "multinode-409200" server: "https://172.26.185.116:8443"
	I0429 13:00:53.304238   13184 api_server.go:166] Checking apiserver status ...
	I0429 13:00:53.317069   13184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:00:53.361450   13184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup
	W0429 13:00:53.384335   13184 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2065/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:00:53.398066   13184 ssh_runner.go:195] Run: ls
	I0429 13:00:53.407265   13184 api_server.go:253] Checking apiserver healthz at https://172.26.185.116:8443/healthz ...
	I0429 13:00:53.417221   13184 api_server.go:279] https://172.26.185.116:8443/healthz returned 200:
	ok
	I0429 13:00:53.417221   13184 status.go:422] multinode-409200 apiserver status = Running (err=<nil>)
	I0429 13:00:53.417221   13184 status.go:257] multinode-409200 status: &{Name:multinode-409200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:00:53.417221   13184 status.go:255] checking status of multinode-409200-m02 ...
	I0429 13:00:53.418088   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:00:55.514966   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:00:55.516090   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:55.516090   13184 status.go:330] multinode-409200-m02 host status = "Running" (err=<nil>)
	I0429 13:00:55.516090   13184 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:00:55.516940   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:00:57.631799   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:00:57.631799   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:00:57.632430   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:00.219063   13184 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:01:00.219141   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:00.219141   13184 host.go:66] Checking if "multinode-409200-m02" exists ...
	I0429 13:01:00.234159   13184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:01:00.234159   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m02 ).state
	I0429 13:01:02.353149   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0429 13:01:02.353149   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:02.353149   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-409200-m02 ).networkadapters[0]).ipaddresses[0]
	I0429 13:01:04.898434   13184 main.go:141] libmachine: [stdout =====>] : 172.26.183.208
	
	I0429 13:01:04.898434   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:04.898962   13184 sshutil.go:53] new ssh client: &{IP:172.26.183.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-409200-m02\id_rsa Username:docker}
	I0429 13:01:04.996168   13184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7618959s)
	I0429 13:01:05.009694   13184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:01:05.035913   13184 status.go:257] multinode-409200-m02 status: &{Name:multinode-409200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:01:05.035913   13184 status.go:255] checking status of multinode-409200-m03 ...
	I0429 13:01:05.037168   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-409200-m03 ).state
	I0429 13:01:07.129328   13184 main.go:141] libmachine: [stdout =====>] : Off
	
	I0429 13:01:07.129328   13184 main.go:141] libmachine: [stderr =====>] : 
	I0429 13:01:07.129328   13184 status.go:330] multinode-409200-m03 host status = "Stopped" (err=<nil>)
	I0429 13:01:07.129328   13184 status.go:343] host is not running, skipping remaining checks
	I0429 13:01:07.129328   13184 status.go:257] multinode-409200-m03 status: &{Name:multinode-409200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (77.73s)

TestPreload (532.32s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0429 13:16:27.486774    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:17:24.791615    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m29.9495191s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-161000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-161000 image pull gcr.io/k8s-minikube/busybox: (8.4813875s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-161000
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-161000: (40.6897103s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-161000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0429 13:21:10.768645    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
E0429 13:21:27.494050    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-161000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m42.8448764s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-161000 image list
E0429 13:22:08.015762    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-161000 image list: (7.3957442s)
helpers_test.go:175: Cleaning up "test-preload-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-161000
E0429 13:22:24.785135    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-161000: (42.9534491s)
--- PASS: TestPreload (532.32s)

TestScheduledStopWindows (335.45s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-809000 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-809000 --memory=2048 --driver=hyperv: (3m21.8518519s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --schedule 5m: (10.8591185s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-809000 -n scheduled-stop-809000
E0429 13:26:27.492355    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-839400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-809000 -n scheduled-stop-809000: exit status 1 (10.018121s)

                                                
                                                
** stderr ** 
	W0429 13:26:23.839370    7480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-809000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-809000 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.6290535s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --schedule 5s: (10.8507802s)
E0429 13:27:24.798312    8496 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-197400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-809000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-809000: exit status 7 (2.4214704s)

                                                
                                                
-- stdout --
	scheduled-stop-809000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:27:54.331799    7836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-809000 -n scheduled-stop-809000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-809000 -n scheduled-stop-809000: exit status 7 (2.4090706s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:27:56.751222    9756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-809000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-809000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-809000: (27.4021034s)
--- PASS: TestScheduledStopWindows (335.45s)
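A quick sketch of the scheduled-stop workflow exercised above, using the same profile and flags as this run; the final --cancel-scheduled line is an illustrative addition (a documented flag of minikube stop) and was not part of this test:

    # schedule a stop 5 minutes out, then inspect the pending timer
    out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --schedule 5m
    out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-809000
    # a later --schedule replaces the pending one; --cancel-scheduled aborts it
    out/minikube-windows-amd64.exe stop -p scheduled-stop-809000 --cancel-scheduled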

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (427.494ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-899400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 13:28:26.586913    6212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
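As the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the failing invocation from this test next to the remedy the message itself suggests (same profile name as this run):

    # conflicting flags: exits 14 (MK_USAGE), as logged above
    out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
    # unset any globally configured version, then start without Kubernetes
    out/minikube-windows-amd64.exe config unset kubernetes-version
    out/minikube-windows-amd64.exe start -p NoKubernetes-899400 --no-kubernetes --driver=hyperv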

                                                
                                    

Test skip (30/190)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (263.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-197400 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-197400 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 5088: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (263.65s)

                                                
                                    
TestFunctional/parallel/DryRun (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-197400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-197400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0535558s)

                                                
                                                
-- stdout --
	* [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:33:03.638938   13780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 11:33:03.740935   13780 out.go:291] Setting OutFile to fd 900 ...
	I0429 11:33:03.741934   13780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:03.741934   13780 out.go:304] Setting ErrFile to fd 800...
	I0429 11:33:03.741934   13780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:03.769640   13780 out.go:298] Setting JSON to false
	I0429 11:33:03.775820   13780 start.go:129] hostinfo: {"hostname":"minikube6","uptime":31856,"bootTime":1714358527,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:33:03.775820   13780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:33:03.782651   13780 out.go:177] * [functional-197400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:33:03.787250   13780 notify.go:220] Checking for updates...
	I0429 11:33:03.789596   13780 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:33:03.792104   13780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:33:03.795077   13780 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:33:03.798238   13780 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:33:03.802028   13780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:33:03.806058   13780 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:33:03.808122   13780 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-197400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-197400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0292665s)

                                                
                                                
-- stdout --
	* [functional-197400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0429 11:32:58.596977   12420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0429 11:32:58.689600   12420 out.go:291] Setting OutFile to fd 936 ...
	I0429 11:32:58.689600   12420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:32:58.689600   12420 out.go:304] Setting ErrFile to fd 620...
	I0429 11:32:58.689600   12420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:32:58.712606   12420 out.go:298] Setting JSON to false
	I0429 11:32:58.716608   12420 start.go:129] hostinfo: {"hostname":"minikube6","uptime":31851,"bootTime":1714358527,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0429 11:32:58.716608   12420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 11:32:58.720610   12420 out.go:177] * [functional-197400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0429 11:32:58.724608   12420 notify.go:220] Checking for updates...
	I0429 11:32:58.726614   12420 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0429 11:32:58.729610   12420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:32:58.731605   12420 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0429 11:32:58.734598   12420 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:32:58.737607   12420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:32:58.742594   12420 config.go:182] Loaded profile config "functional-197400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 11:32:58.743601   12420 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    